Big Data Engineer – Entry Level


🏢 Career.zycto · 📍 Granville, Sydney · 💼 Full-Time · 💻 On-site · 🏭 Information Technology & Services · 💰 AUD 60,000 – AUD 75,000 per year

About Company

⚠ Job Safety Notice: We actively monitor listings to prevent scam, misleading, or unauthorized postings, in line with PhishFort anti-phishing policies. If you spot a suspicious listing, submit a Job Takedown Request immediately for review and appropriate removal action.

Career.zycto is a dynamic force in leveraging data-driven insights to solve complex business challenges. We thrive on innovation, pushing the boundaries of what’s possible with big data technologies. For an aspiring Big Data Engineer, our collaborative environment offers an unparalleled launchpad. You’ll work alongside seasoned experts, gaining hands-on experience with cutting-edge tools and real-world datasets from day one. We believe in nurturing talent, providing robust mentorship, and fostering a culture where fresh perspectives are celebrated. Join us to build a foundational career in a company that values growth, learning, and making a tangible impact in the data landscape.

Job Description

Are you a recent graduate or an aspiring data enthusiast eager to kickstart your career in the expansive world of big data? Career.zycto is seeking a passionate and driven Entry Level Big Data Engineer to join our innovative team in Granville, Sydney. This is an incredible opportunity to immerse yourself in cutting-edge data technologies, learn from industry experts, and contribute to impactful projects that leverage vast datasets to drive business intelligence.

At Career.zycto, we believe in nurturing talent and providing a robust environment for professional growth. As an Entry Level Big Data Engineer, you won’t just be an observer; you’ll be an active participant in designing, building, and maintaining scalable data pipelines and infrastructure. You will work closely with senior engineers and data scientists, gaining invaluable hands-on experience with platforms like Hadoop, Spark, and various cloud-based big data services. This role is perfect for someone with a strong foundational understanding of programming and databases, and a burning desire to master the complexities of distributed systems and large-scale data processing.

We are looking for individuals who are not afraid to ask questions, take initiative, and continuously learn in a fast-paced and collaborative setting. Your contributions will directly support our mission to transform raw data into actionable insights, helping our clients make smarter, data-driven decisions. This position offers a unique chance to develop core big data engineering skills, understand the full lifecycle of data projects, and evolve into a key contributor within our growing data team. If you’re ready to lay a solid foundation for a successful career in big data and contribute to meaningful projects, we encourage you to apply and grow with us.

Key Responsibilities

  • Assist in the design, development, and maintenance of scalable data pipelines using big data technologies.
  • Learn and apply best practices for data ingestion, processing, and storage.
  • Collaborate with senior engineers to troubleshoot and optimize existing data infrastructure.
  • Write clear, concise, and well-documented code for data processing jobs.
  • Participate in data quality assurance and monitoring activities.
  • Contribute to the research and evaluation of new big data tools and technologies.
  • Support data scientists and analysts by ensuring data availability and reliability.
  • Engage in continuous learning to stay updated with industry trends and emerging technologies.

Required Skills

  • Proficiency in at least one programming language (e.g., Python, Java, Scala).
  • Solid understanding of SQL and relational databases.
  • Basic knowledge of big data concepts and ecosystems (e.g., Hadoop, Spark).
  • Familiarity with data structures, algorithms, and object-oriented programming.
  • Strong problem-solving and analytical abilities.
  • Excellent communication and teamwork skills.
  • Bachelor's degree in Computer Science, Engineering, Data Science, or a related technical field.

Preferred Qualifications

  • Experience with cloud platforms (e.g., AWS, Azure, GCP) and their big data services.
  • Familiarity with version control systems (e.g., Git).
  • Understanding of data warehousing concepts.
  • Prior academic projects or internships involving large datasets.

Perks & Benefits

  • Competitive salary package and performance bonuses.
  • Comprehensive health and wellness programs.
  • Mentorship and professional development opportunities.
  • Dedicated learning and development budget for courses and certifications.
  • Flexible work arrangements (where applicable).
  • Collaborative and supportive team environment.
  • Opportunity to work with cutting-edge technologies.
  • Employee assistance program.

How to Apply

Interested candidates are encouraged to click on the application link below to submit their resume and cover letter. Please highlight your passion for big data and any relevant projects or coursework.

Job Application
