ROLE : DATA ENGINEER
EXPERIENCE : 3 TO 5 YEARS
Graduation / Post Graduation : Specialization in Computer Science, Software Engineering, Business Analytics, etc.
- Design and develop scalable ETL scripts for the business source systems, and develop ETL routines to populate databases from those sources and to create aggregates.
- Perform thorough testing and validation to ensure the accuracy of data transformations and data verification.
- Suggest & implement best practices for performance tuning when working with large datasets.
- Develop and implement scripts for database maintenance, monitoring, performance tuning, and so forth.
- Ensure proper data governance and quality of the data.
- Define standard data management principles and policies for retention and archival.
- Troubleshoot data issues within and across the business and present solutions to these issues.
- Analyse complex data elements and systems, data flows, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
- Provide real-time knowledge transfer to the team on requirements, design, and development.
- Manage the infrastructure & deployment of the release artefacts by coordinating with respective teams.
- Work in an agile environment with defined sprints to deliver assigned work within the stipulated timelines.
- Excellence in implementing, configuring, and using Big Data technologies, including Hadoop, Spark, Solr, Kafka, Hive, and Impala, and cloud platforms such as AWS and Azure.
- Adhere to software development best practices and coding standards in all work products and participate in the refinement of those practices and standards to improve quality and productivity.
- Design & development of an event-processing pipeline that can handle millions of events.
- Minimum 3 years of hands-on experience in data management & engineering environment.
- Design & development of a data-processing pipeline that can handle millions of rows.
- Design innovative methodologies to extract information from data.
- Strong knowledge of core software technologies and fundamentals – specifically for large-scale distributed systems – and building highly available services.
- Ability to work across the full product development lifecycle, from design and development to testing and deployment, to running large-scale, highly available services in production.
- Fundamental knowledge of scheduling, synchronization, IPC, and memory management is preferred.
- Familiarity with code versioning tools such as Git, SVN and Mercurial.
- Understanding of Agile & Scrum development methodology.
- Must have knowledge of AWS services.
- Demonstrated clear and thorough logical and analytical thinking, as well as problem-solving skills
- Self-directed; able to work independently and research innovative solutions to business problems.
- Must be flexible to travel on-site if required.
- Effective interpersonal communication across various levels of the organization.
- Ability to interpret, evaluate and communicate detailed information in a manner that is appropriate to the audience.
- Ability to conduct root cause analysis and performance tuning for complex business processes and functionality.
- Ability to interact with IT and business users across the organization to resolve issues and provide solutions in a timely manner.
- Should be proactive & transparent in the deliverables, and a critical thinker while designing solutions.
- Team oriented and enjoys working in a collaborative development environment.
Tools & Technologies
- Code Management: Git, SVN
- Operating System: Mac, Linux, Windows
- Database Technologies: SQL & NoSQL databases (Postgres, MongoDB, AWS S3, Redis), etc.
- Cloud: AWS