A virtuous cycle of learning, changing, failing, and improving while breaking down silos with technology
This is for anyone considering a career in DevOps engineering: a role that in recent years has become one of the most popular and in-demand in the industry, and one that is far from just another trendy term. In fact, according to Glassdoor, DevOps Engineer ranks among the top five jobs in the world.
So, you got your degree in computer science, software engineering, or a related tech discipline. But are you ready to join the ranks? What’s the best next step to take? How do you jump-start your career? Have you considered DevOps as your path? Before I give you some insights into what we actually do, allow me some hard selling for the graduate program we have designed here at Upstream, especially for aspiring DevOps engineers.
“Start at Upstream” is our 12-month paid internship program that gives new graduates the professional experience they need to kick off their careers. We are currently accepting CVs (until the 8th of April) for a year-long paid placement. It includes a training Bootcamp followed by work on real projects, integrated within our teams; your own mentor for the whole year, guiding you on your journey every step of the way; and, once the twelve months have ended, the opportunity to continue working with us. Our high-level training covers all aspects of DevOps engineering, including concepts and practices you have probably never encountered before: programming, automation, management and monitoring of production systems, and CI/CD.
What DevOps is
DevOps is a software development methodology that brings software development (Dev) and information technology operations (Ops) together across the entire service lifecycle, from design through the continuous development process to production support, with high-quality delivery throughout. DevOps is focused on continual learning, change, and even failure, and it encompasses people, practices and tools. It breaks down barriers between organizational functions that have historically sat in silos, including product, engineering, security, quality assurance and operations. DevOps signifies a culture shift within technology, unifying development skills with interpersonal skills. Possessing those skills gives engineers the ability to collaborate with multiple teams and maintain an overarching view of the business.
Working as a DevOps Engineer at Upstream
Here at Upstream, the DevOps team has a motto. “We try to automate everything so as to become obsolete!” This is genuinely the case – automation is vital for us and our ultimate goal is to reach a point where everything will be working so smoothly, we will no longer be needed! From automating deployments, continuous delivery pipelines and autoscaling configurations, to automating failover infrastructure, we do it all!
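To make “autoscaling configuration” a little more concrete, here is a minimal sketch of what one can look like in Kubernetes (one of the platforms we use, described further down). The workload name and thresholds are hypothetical, invented for the example:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-service            # hypothetical workload name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Once a rule like this is in place, scaling up and down under load happens with no human in the loop, which is exactly the kind of “becoming obsolete” the motto is about.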
In order to achieve this, we use new technologies and methods not only for engineering tasks but operational ones as well. In a rapidly evolving technology landscape, we are constantly seeking and exploring cutting edge technologies in order to ensure the highest quality of deliverables every time, providing our customers with integration of the finest software technologies, tools and frameworks currently available.
We constantly apply custom engineering tools to make our lives easier. Given the large production footprint and load that Upstream handles, these tools exist for effectiveness: to get the job done more easily, faster and better. Our job, ultimately, is to find a path to higher efficiency and greater consistency, so that our teams can produce better results.
Our work is not just about research and development. We also do a lot of operational work such as providing new system environments for our accounts, meeting specific customer needs, deployments, upgrades or even troubleshooting production environments.
In fact, everything that runs in production passes through us, from the hardware to the software. We provide production architectural design and ensure the optimal operation of the whole infrastructure. Infrastructure here refers to the hardware components, such as storage and servers, all network equipment, and all the software necessary to operate an IT system, according to the established needs and the “size” of the company system. Alongside this comes continuous monitoring at all levels (infrastructure, network, application, configuration, third-party, etc.) to create better visibility and maintain system uptime, raising the corresponding alarm whenever there is a service degradation, outage or application performance issue.
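As an illustration of how such an alarm can be expressed, here is a hypothetical alerting rule in the format used by Prometheus, one of the monitoring tools we list further down. The metric name, threshold and durations are invented for the example:

```yaml
groups:
  - name: service-availability
    rules:
      - alert: HighErrorRate
        # Fire when more than 5% of requests fail over a 5-minute window
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m                  # condition must hold for 10 minutes before alerting
        labels:
          severity: critical
        annotations:
          summary: "Error rate above 5% for 10 minutes"
```

The `for:` clause is what keeps short blips from paging anyone; only a sustained degradation raises the alarm.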
Another responsibility that we have is to provide tech training to other departments. Training and development have enormous value for employees. Cross-training allows employees to acquire new skills, sharpen existing ones, perform better, increase their productivity and achieve job satisfaction. This helps us ensure that our people find out their potential and perform at their peak.
Let’s dive into some more technical ground now and present some of the main technologies and tools we use, along with brief descriptions:
- VMware virtualization technology as our hypervisor architecture. Virtualization software creates an abstraction layer over computer hardware that allows the hardware elements of a single computer (processors, memory, storage, and more) to be divided into multiple virtual computers, commonly called virtual machines (VMs).
- Docker is a set of Platform-as-a-Service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.
- Kubernetes (also known as k8s or “kube”) is a production-grade container-orchestration system that automates the deployment, scaling, and management of containerized applications across multiple hosts. We can cluster together groups of hosts running Linux containers, and Kubernetes helps us easily and efficiently manage these clusters. Kubernetes clusters can span hosts across on-premises, public, private, or hybrid clouds. For this reason, Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling.
We first ran a proof-of-concept (PoC) production environment in 2016, when this technology was not yet mature. Today, we run two vanilla production-grade Kubernetes clusters on-premises, installed with the help of Kubespray on top of vSphere (Kubespray is a composition of Ansible playbooks, inventory, provisioning tools, and domain knowledge for generic OS/Kubernetes cluster configuration management tasks). We stay close to the latest version and, so far, host more than 1,600 applications (Deployments and StatefulSets)!
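To give a flavour of what running a workload on such a cluster looks like, here is a minimal sketch of a Kubernetes Deployment manifest. The application name and image are hypothetical, not taken from our actual clusters:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-service            # hypothetical application name
spec:
  replicas: 3                   # Kubernetes keeps three replicas running at all times
  selector:
    matchLabels:
      app: demo-service
  template:
    metadata:
      labels:
        app: demo-service
    spec:
      containers:
        - name: demo-service
          image: registry.example.com/demo-service:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

The manifest is declarative: we describe the desired state (three replicas of this container) and Kubernetes continuously works to keep reality matching it, restarting or rescheduling pods as needed.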
- Cloudera Hadoop (CDH), the world’s most popular Hadoop distribution, is Cloudera’s 100% open-source platform. It includes all the leading Hadoop ecosystem components (HDFS, YARN, Hive, Spark, Kafka, HBase, Impala, ZooKeeper) to store, process, discover, analyze, model, and serve unlimited data, and it is engineered to meet the highest enterprise standards for stability and reliability. Cloudera has created a functionally advanced system that helps you perform end-to-end Big Data workflows.
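Returning to containers for a moment: the Docker packages mentioned above are typically described by a Dockerfile. Here is a minimal illustrative example; the Python application and file names are hypothetical:

```dockerfile
# Hypothetical Dockerfile for a small Python service
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first, so this layer is cached across rebuilds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "app.py"]
```

An image built from a file like this (`docker build -t demo-service .`) runs identically on a laptop, in CI, and in production, which is precisely the isolation and reproducibility the container model promises.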
Main DevOps Tools
- Jira software from Atlassian, for planning sprints, tracking issues, managing releases and reporting on team performance in real time.
- GitLab is a web-based DevOps lifecycle tool that provides a Git repository manager with built-in version control, issue tracking, continuous integration and many deployment pipeline features. (Source code management is where development team sharing and collaboration begins.)
- Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems, and can configure both Unix-like systems as well as Microsoft Windows.
- Jenkins is a free and open source automation server. It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery.
- Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionalities usually required by users such as security, identity and management.
- Nagios is a free and open-source computer-software application that monitors systems, networks and infrastructure. Nagios offers monitoring and alerting services for servers, switches, applications and services.
- Prometheus is a free software application used for event monitoring and alerting. It records real-time metrics in a time series database built using an HTTP pull model, with flexible queries and real-time alerting.
- Grafana is an open-source platform for monitoring and observability. Grafana allows us to query, visualize, alert on, and understand our metrics no matter where they are stored. It enables us to create, explore, and share beautiful dashboards with our teams and foster a data-driven culture.
- ELK stack is designed to let users take data from any source, in any format, and search, analyze, and visualize that data in real time. “ELK” is an acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch. Kibana lets users visualize Elasticsearch data with charts and graphs.
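Several of these tools meet in the CI/CD pipeline. As a sketch of how GitLab ties source code management to automated delivery, here is a minimal hypothetical `.gitlab-ci.yml`; the job names and `make` targets are invented for the example:

```yaml
# Hypothetical .gitlab-ci.yml: stages run in order, jobs within a stage in parallel
stages:
  - build
  - test
  - deploy

build-app:
  stage: build
  script:
    - make build          # invented make target

run-tests:
  stage: test
  script:
    - make test

deploy-production:
  stage: deploy
  script:
    - make deploy
  only:
    - main                # deploy only from the main branch
```

Every push triggers the pipeline, so a broken build or failing test is caught minutes after the commit rather than days later in production.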
Are you intrigued by the above? Do you have a thirst for knowledge and the desire to pursue a DevOps Engineer role that will equip you with a varied skill set and boost your career? Join the “ride” with us and work closely with a dynamic, smart, agile and highly motivated team in a competitive and fast-paced environment.
Visit www.startatupstream.com to submit your application now!