Senior Data Architect
Collaborate within a team of technologists to produce enterprise-scale solutions for our clients’ needs. Must have solution architecture experience across the tech stack and a deep understanding of AWS Redshift, AWS Glue, AWS Athena, AWS S3, Hadoop/Big Data, Collibra, Talend, Cassandra, and Spark.
- Be the primary liaison between clients, vendors, and client teams to drive data integration requirements that help realize customer data platform and campaign rollout initiatives.
- Serve as a technical resource during the software development lifecycle, solving business issues by identifying and analyzing detailed requirements and translating them into data integration and database system designs
- Define and build the ETL and data pipelines that enable faster, better data-informed decision-making within the business
- Transform data into actionable information and convert analytical results into a format that is easy to draw insights from and share with colleagues.
- Implement solutions to enable a stable architecture for capturing robust data sets
- Maintain, improve, clean, and manipulate data in the customer data platform and analytics databases
- Work with data analytics teams, data scientists, and software engineers to understand and aid in the implementation of database requirements, analyze performance, and troubleshoot issues
- Participate in integrated test sessions of components and subsystems on test and production servers
- Develop and present solutions and recommendations to clients and internal teams
- Use information gained through prior experience, knowledge sharing with other Technology Associates, education and training to resolve issues and remove project barriers
- Deliver quality work on defined tasks with little oversight and direction
- Ensure all deliverables are of high quality throughout the project by adhering to coding standards and best practices and by participating in code reviews
- Provide status updates to team members on a regular basis and clearly escalate issues and risks to project management as needed
- Understanding of data modeling, data lakes, data-driven insights, data quality and governance, data security, and access control mechanisms
- Ability to understand and articulate business value; experience with large-scale data projects in an agile delivery model
- Practical project experience with the AWS tech stack: AWS Redshift, AWS Glue, AWS Athena, AWS S3, AWS Aurora, AWS Lambda, AWS IAM, AWS CloudWatch, AWS KMS, AWS CloudTrail, and Redshift Spectrum
- Experience with Hadoop/Big Data, Collibra, Talend, Cassandra, Spark, Solr, and Tableau
- BS in Computer Science or equivalent education/professional experience is required.
- 10+ years in a data-engineering role with demonstrated experience in data integration and data warehouse projects.
- Experience architecting and building data warehouses, customer profile databases, data marts, etc.
- Experience in data integration projects and automation via ETL tools (e.g., Talend, Informatica, SSIS).
- Experience in the Hadoop ecosystem (Hive, Spark, Impala, etc.)
- GCP Data Engineer certified, with knowledge of Apache Beam and programming languages including Java and Python
- Experience with data modeling, warehouse design, and fact/dimension concepts
- Experience working with different SQL dialects (e.g., T-SQL, PostgreSQL, PL/SQL)
- Experience working with code repositories and continuous integration tools (e.g., Git, Jenkins)
- Understanding of development and project methodologies
- Ability to work collaboratively in teams with other specialized individuals
- Able to work in a fast-paced, technical environment
- Good verbal and written communication skills