Big Data Architect


  • Dallas, TX
  • Contract
  • Big Data Visualization
  • Azure
  • Agile
  • Linux
  • Job Posted: Monday, August 06, 2018
  • Expires On: Wednesday, September 05, 2018
  • Job Type: Contract
  • Duration: 1 month
  • Desired Experience: More than 10 years
  • Work Authorization: Resident/Citizen, H-1B

Senior Enterprise/Big Data Engineer


Data software development, interfacing, and configuration


Builds the software and configuration that implement what the Enterprise/Big Data Architect has designed. This role develops, maintains, tests, and evaluates big data solutions within the organization, and is frequently involved in the design of those solutions because of its hands-on experience with specific tools, frameworks, and languages.


  • Data software development
  • Data visualization and reporting software development
  • Data transport development and configuration
  • Data purge and archive
  • Data systems integration and API design and development
  • Developing, testing, and reviewing code and providing feedback
  • Establishing and implementing big data best practices
  • Reviewing requirements to ensure they are functional and usable
  • Performing all phases of software engineering, including requirements analysis, application design, code development, and testing
  • Designing reusable components, frameworks, and libraries
  • Working closely with architecture groups and driving solutions
  • Participating in an Agile/Scrum methodology to deliver high-quality software releases every two weeks through sprints
  • Designing and developing innovative solutions to meet the needs of the business
  • Reviewing code and providing feedback on best practices and performance improvements
  • Troubleshooting production support issues post-deployment and devising solutions as required
  • Mentoring and guiding other software engineers within the team
  • Data transformation development

Technical Skills

  • Big data, enterprise data, and distributed data
  • SQL/NoSQL experience with databases such as Microsoft SQL Server, DB2, Teradata, Oracle, Aurora, PostgreSQL, MySQL, Cassandra, MongoDB, etc.
  • A strong understanding of the software development life cycle and methodologies such as Agile and Waterfall
  • Experience with object-oriented/functional scripting languages: Python, Java, C#, Scala, etc.
  • Strong understanding of big data technologies such as Hadoop, MapReduce, Spark, YARN, Hive, Pig, Presto, Storm, etc.
  • Strong experience with ETL/ELT tools (Informatica, Talend, Pentaho, ODI)
  • Strong knowledge of APIs (specifically REST APIs), SDKs, and CLI tools
  • AWS knowledge as applied to big data applications
  • Data security/privacy, including PCI
  • Troubleshooting and performance issue resolution
  • Thorough understanding of technology infrastructure and how it relates to projects
  • Experience working in a DevOps model using Agile
  • Experience with CI/CD using Jenkins pipelines, OpenShift, Gradle, GitHub, and Docker
  • Test automation
  • DevOps knowledge is a nice-to-have

Other Skills

  • Both creative and analytical approaches in a problem-solving environment
  • Excellent written and verbal communication skills
  • Ability to communicate with both technical and non-technical collaborators
  • Excellent teamwork and collaboration skills

Experience and Education

  • 6+ years of experience in object-oriented programming
  • 5+ years of experience in large development initiatives involving big data
  • 5+ years of experience developing high-volume database applications
  • 3+ years of experience with complex shell scripting on Linux
  • 2+ years of experience developing distributed computing applications using solutions such as Hadoop, HBase, Hive, Java/MapReduce, Spark, Scala, Storm, Kafka, Flume, Sqoop, and Pig
  • 2+ years of Agile experience
  • 2 years of cloud experience (AWS, Azure, containers, etc.)
  • Bachelor's or Master's degree in Computer Science or Software Engineering