Hours: Full-time
Location: Denver, CO 80202

About this job

Sling TV L.L.C. provides an over-the-top television experience on TVs, tablets, gaming consoles, computers, smartphones and other streaming devices. Distributed across a variety of strategic device partners, including Google, Amazon, Apple TV, Microsoft, T-Mobile, Sprint, Roku, Samsung, Netflix, and many others, Sling TV offers two primary domestic streaming services that collectively include more than 100 channels of top content. Featured programmers include Disney/ESPN, Fox, ABC, NBC, HBO, AMC, A&E, EPIX, Cinemax, Starz, NFL Network, NBA TV, NHL Network, Pac-12 Networks, Hallmark, Viacom, Univision, and more. For Spanish-speaking customers, Sling Latino offers a suite of standalone and extra Spanish-programming packages tailored to the U.S. Hispanic market. And for those seeking international content, Sling International currently provides more than 300 channels in 20 languages (available across multiple devices) to U.S. households.

 

Sling TV is the #1 Live TV Streaming Service 

(Based on the number of OTT households as reported by comScore as of April 2017)

 

Sling TV is a next-generation service that meets the entertainment needs of today's viewers.

A successful Big Data Infrastructure Engineer will have:

 

  • A 4-year college degree in Computer Science or Information Technology (Bachelor of Science preferred; a master's degree is a plus)
  • 3+ years of professional enterprise development experience
  • Experience with one or more of the following:
    • Operating large-scale, high-uptime Kafka, Elasticsearch, and/or Cassandra clusters in on-premise, public, or hybrid cloud environments; scaling big data technologies such as Kafka, Elasticsearch, and Cassandra with zero downtime
    • Working knowledge of building Apache Spark and Hadoop clusters
    • Strong background in Linux systems, including shell scripting and performance tuning; excellent understanding of Internet technologies and protocols (TCP/IP, DNS, HTTP, SSL, etc.); awareness of general monitoring principles and tools
    • Coding skills in Python, PHP, or another interpreted language like Perl or Ruby
    • Deployment and maintenance of Spring Boot Java back-end applications
    • Configuration management tools like SaltStack, Puppet, Ansible, or Chef; experience with Kubernetes, Docker, and/or Jenkins
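As a rough illustration of the scripting and monitoring skills listed above, here is a minimal sketch of a service reachability check in Python. The host names, ports, and function name are hypothetical and not part of the actual Sling TV stack; a real monitoring setup would use dedicated tooling rather than an ad-hoc script.

```python
import socket

# Hypothetical service endpoints -- illustrative only, not real hosts.
SERVICES = {
    "kafka": ("kafka-broker-1.example.com", 9092),
    "elasticsearch": ("es-node-1.example.com", 9200),
    "cassandra": ("cassandra-1.example.com", 9042),
}

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in SERVICES.items():
        status = "up" if is_reachable(host, port) else "DOWN"
        print(f"{name}: {status}")
```

The same idea carries over directly to shell (`nc -z host port`) or to any of the interpreted languages mentioned in the requirements.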

#LI-SLING2

Big Data Infrastructure Engineer

 

Are you a driven, hands-on technology generalist with a strong bias to action, excellent analytical skills, and a taste for good engineering? Are you currently working in a high-scale, zero-downtime Kafka, Elasticsearch, Cassandra, and/or Spark environment? Are you looking for new, complex challenges that vary daily? If so, we are looking for a new member to join our team here in American Fork, UT or Englewood, CO to help us push our systems, processes, and culture to the next level. Come be a part of changing the face of TV!

 

The BigData & Analytics team is responsible for reliably ingesting high-volume streams of data from multiple systems, enriching and processing that data, and providing real-time queries, visualizations, insights, alerts, and other features from it. As the Sling TV platform grows in functionality and complexity, the team is responsible for providing services and infrastructure for visualization and machine learning over the different streams of data generated by the platform's components. To maintain and manage these data streams, we have developed a scalable platform that relies on Kafka, Elasticsearch, Cassandra, and a Greenplum data warehouse. We are also in the initial stages of building an Apache Hadoop cluster for new machine learning processes running Apache Spark.
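The ingest-enrich-serve pattern described above can be sketched with plain Python generators. This is a minimal illustration only: the event fields (`device`, `channel`) and the enrichment rule are hypothetical, and a production pipeline would consume from Kafka and write to Elasticsearch or Cassandra rather than work over in-memory lists.

```python
# Minimal sketch of an ingest -> enrich -> emit stream over dict events.
# Field names and the device catalog are hypothetical.
def ingest(raw_events):
    """Yield only well-formed events (those with a 'device' field)."""
    for event in raw_events:
        if "device" in event:
            yield event

def enrich(events, device_catalog):
    """Attach a device category looked up from a reference table."""
    for event in events:
        event = dict(event)  # copy so the source record is not mutated
        event["device_category"] = device_catalog.get(event["device"], "unknown")
        yield event

raw = [
    {"device": "roku", "channel": "ESPN"},
    {"channel": "HBO"},                      # malformed: no device field
    {"device": "appletv", "channel": "AMC"},
]
catalog = {"roku": "streaming-box", "appletv": "streaming-box"}

enriched = list(enrich(ingest(raw), catalog))
for e in enriched:
    print(e)
```

Because each stage is a generator, events flow through one at a time, which mirrors how a streaming pipeline processes records as they arrive rather than in batches.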

 

Primary responsibilities and skills include the following areas:

 

  • Having a good understanding of the different pieces of the SlingTV Analytics stack
  • Having the knowledge and tools to fully operate our big data technology stack
  • Working with minimal guidance on daily operationalization of the systems and development
  • Working side-by-side with the DevOps team to fully automate new feature deployments
  • Owning our CI/CD pipeline and the scripting technologies that are part of our stack
  • Owning the tooling for our key architectural components