• Develop large-scale data structures and pipelines to organize, collect, and standardize data, helping to generate insights and address reporting needs
• Write ETL (Extract/Transform/Load) processes, design database systems, and develop tools for real-time and offline analytic processing
• Build data marts and data models
• Integrate data from a variety of sources (Oracle, Postgres, flat files, etc.), ensuring it adheres to data quality and accessibility standards, particularly for Master Data Management
• Analyze current database environments to identify and assess critical capabilities and recommend solutions
Required Skills and Experience:
• Possess strong skills in data modeling
• Database query performance tuning and advanced SQL programming
• Data collection, curation, preparation, and transformation
• Data warehousing/dimensional modeling, reporting, and business intelligence
• Strong working knowledge of Postgres or relevant experience with another RDBMS (Oracle, MS SQL)
• Experience with Bash shell scripting, UNIX utilities, and UNIX commands
• Good spoken English
• Experience with NoSQL systems (Cassandra, Couchbase, Mongo)
• Working knowledge of MPP/columnar databases (Redshift, Snowflake)
• High proficiency with cloud-based data technologies in the AWS technology stack (EMR, S3, Redshift, Tez, Hive, Presto, Spark, Data Pipeline, DMS, EC2, SWF, Postgres RDS, Postgres Aurora, Qubole, DynamoDB)