Principal Architect - Data Engineer

  • Job Reference: 83286922-2
  • Date Posted: 23 March 2019
  • Recruiter: Amgen
  • Location: Tampa, Florida
  • Salary: On Application
  • Sector: Engineering
  • Job Type: Permanent

Job Description

Amgen is seeking a Principal Architect - Big Data Engineering to join the Analytics and Knowledge Management (AKM) team. This role is responsible for implementing and managing the applications and platforms that support Amgen's Big Data strategy and roadmap. We are looking for an individual who is up to the challenge of blending the fast-changing technology landscape of Big Data analytics with the complex, high-impact space of pharmaceutical analytics. The Architect will report to the Senior Manager, Information Systems, and will work out of our Amgen Capability Center in Tampa, FL.

At Amgen, our mission is simple: to serve patients. Our new Tampa Capability Center provides essential services that enable us to better pursue this mission. This state-of-the-art center serves as a base for finance, information systems, and human resources professionals to make a meaningful impact at one of the world's leading biotechnology companies.

The Principal Architect - Big Data Engineering is responsible for the design, implementation, and support of the platforms that carry Amgen's Big Data and analytics services into next-generation technologies. The incumbent will define the architecture and technical road maps to align with the overall strategy; manage and lead the administration of the platform lifecycle; develop success metrics; identify redundancies and dependencies on other services; and drive service and release targets while managing service delivery teams.

  • Responsible for the architecture, delivery, and support of the data lake platform
  • Implement and manage platform architecture and technical road maps that align with the Amgen Data strategy
  • Design, develop, test, implement, and support enterprise shared components using technologies such as Python, Angular, AWS, and Spark
  • Work closely with Engagement Managers to drive platform adoption with various functional and application groups
  • Use effective written/verbal communication skills and lead demos at different roadmap sessions
  • Manage the Enterprise Data Lake on Hadoop and AWS environments to ensure that service delivery is cost-effective and that business SLAs for uptime, performance, and capacity are met
  • Help define guidelines, standards, strategies, security policies and change management policies to support the Data Lake
  • Advise and support project teams (project managers, architects, business analysts, and developers) on tools, technology, and methodology related to the design and development of Data Lake and other Big Data solutions
  • Maintain knowledge of market trends and developments in AWS Big Data tools, analytics software, and related emerging technologies such as cloud managed services and Agile/DevOps development processes, in order to recommend and deliver best-practice solutions
  • Gather requirements from internal customers and consult them on best practices for using Big Data platforms effectively as a data and computing resource; provide management with recommendations and follow-on solutions that support platform maintenance and growth toward strategic and operational goals
  • Lead and mentor other team members, and perform code and design reviews
  • Work as part of team in an Agile Software development model
  • Travel: international and domestic travel of up to 5% may be required

Basic Qualifications:
  • Doctorate degree and 2 years of Information Systems experience, OR
  • Master's degree and 4 years of Information Systems experience, OR
  • Bachelor's degree and 6 years of Information Systems experience, OR
  • Associate's degree and 10 years of Information Systems experience, OR
  • High School Diploma / GED and 12 years of Information Systems experience

Preferred Qualifications:
  • 8+ years of experience working on Big Data platforms and application development
  • Hands-on experience with Big Data applications, Hadoop, AWS, and data management platforms
  • Experience working with platform and shared service models
  • Experience with source code control systems such as GitHub, CVS, and SVN
  • Experience with Hadoop and MapReduce concepts
  • Working knowledge of relational databases and data integration tools/technologies
  • Experience with Machine Learning and Natural Language Processing (NLP)
  • Experience with Agile methodologies and DevOps
  • Excellent verbal, written and interpersonal communication skills
  • Experience with product development is a plus
  • Strong analytical skills for effective problem solving