Primary Duties and Responsibilities
As a Data Engineer on the Data Architecture team, you will play a key role in technology initiatives that advance health informatics and analytics in the health sciences by improving the usability, performance, and overall architecture of the data infrastructure. You will develop reliable, large-scale data processing pipelines, foundational architectural components, reusable frameworks, and data models to support the enterprise data warehouse, data lakes, feature stores, and the machine learning platform.

The role requires technical acumen for planning, designing, developing, implementing, and administering data-based systems that acquire, prepare, store, and provide access to data and metadata. You will maintain and optimize systems, migrate data and systems as needed, ensure the integrity and completeness of data and workflows, and manage and/or develop data practices, databases, and information systems, as well as guidelines, dictionaries, registries, and/or services. The work may include interpreting scientific research data artifacts, mediating across science and technology domains, and providing long-term data care.

As an information architect and data steward, you will design systems, data products, and/or data production processes with a focus on data curation, data exchange, data security, data integrity, and information environments, and you will (re)evaluate frameworks, strategies, standards, and standards-making activities. The role may involve work with a project-level data repository, a center, or an archive.
You will be part of the team building UCLA Health’s Data Platform and products. This is a unique opportunity to help advance analytics for one of the nation’s leading healthcare organizations, where big data serves as a platform for building solutions.
Job Qualifications
- Minimum five years of software development experience.
- 2+ years of experience on the data or backend-systems side of software development.
- Strong industry experience in programming languages such as Python, with the ability to pick up new languages and technologies quickly.
- Experience with orchestration tools such as Airflow is required (see the sketch after this list).
- Strong experience with relational databases such as SQL Server or Oracle is required.
- Strong background in data warehousing and ETL principles, architecture, and their implementation in large environments.
- Experience working with machine learning systems such as Databricks, feature stores, and MLOps tooling is preferred.
- Working knowledge of leading cloud platforms such as Azure, AWS, and GCP; Microsoft Azure experience is preferred.
- Bachelor’s degree in Computer Science, Computer Engineering, or a related field from an accredited college or university.
- Master’s Degree preferred.
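
To make the orchestration requirement concrete, here is a minimal sketch of a daily ETL pipeline using Airflow's TaskFlow API (Airflow 2.4+). The DAG id, task names, schedule, and data shapes are hypothetical illustrations chosen for this sketch, not details from this posting.

```python
# Minimal Airflow 2.x DAG sketch (TaskFlow API). All names here
# (DAG id, task names, schedule) are hypothetical illustrations.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_etl():
    @task
    def extract() -> list[dict]:
        # Pull raw records from a source system (stubbed here).
        return [{"id": 1, "value": 42}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Apply a simple transformation step to each record.
        return [{**r, "value": r["value"] * 2} for r in records]

    @task
    def load(records: list[dict]) -> None:
        # Write the prepared records to the warehouse (stubbed here).
        print(f"Loading {len(records)} records")

    # Wire the tasks into an extract -> transform -> load dependency chain.
    load(transform(extract()))


example_etl()
```

The TaskFlow style shown here passes data between tasks via XCom and keeps the dependency graph implicit in ordinary function calls; the same pipeline could equally be expressed with classic operators.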