opsZero is committed to compliance with HIPAA, PCI and SOC2 through the enforcement of stringent privacy and security policies and procedures. opsZero meets or exceeds the HIPAA, PCI and SOC2 standards in the following ways:
- opsZero has leading technical security standards in place to encrypt and protect data at every point in flight and at rest.
- opsZero undertakes a risk assessment on a regular basis and updates its processes for best practices.
- opsZero has a process for information system monitoring and information security policy and procedure management.
- opsZero has implemented and trains all employees and contractors on a complete set of Policies and Procedures which dictate acceptable work practices for a compliance environment.
- opsZero requires all employees and contractors working with PHI to complete training before accessing sensitive data.
- opsZero takes the appropriate measures and continuously reassesses all policies and procedures to ensure that regulatory requirements are met and that data privacy and security is stringently maintained.
Contact opsZero at [email protected] with any follow-up questions.
To enable the sheepdogs of the world, we want to focus our energy initially on the one area that could bring the most leverage: the rapid creation and deployment of high-quality code to solve complex problems. But to see where we fit in, we need to first look at a bit of history.
Programming has been going through a transformation from monolithic codebases to simpler services and functions, while the culture of coding has also created best practices and resolved a slew of social issues such as the tabs vs spaces conundrum.
Traditionally, software was written as a monolith. The benefits of a monolith include that new code can be added and built up over time, and it usually grows to encompass the many edge cases that fulfill business needs. A monolith helps when a company is unsure of its business model and needs to change to fulfill new business cases. Since everything is tied together, the code is additive, making it easier to add things in the beginning.
Deploying a monolith has traditionally been the most straightforward since there is only a single codebase to deploy, and it is easy to just add machines to scale the app. There are not a lot of moving parts, but this is also a root of the problem: if something goes wrong, it takes out the whole system. You can, for example, only scale the database so much. If the monolith is huge, there could be a million places where something could be breaking and taking down the system, which can result in an endless amount of time spent debugging and resolving an issue while new deployments are blocked. Because of this paradox, deploying new features can become a pain.
However, the monolith eventually starts to break apart. The problem with adding more people is that the monolith starts carrying a culture and tradition. The longer people work on the monolith, the more they understand “how it’s done.” A coding style must be learned that may not agree with external best practices, the interactions of different code paths get more complicated, and side effects and edge cases start showing up that need to be resolved.
Monoliths also pose a problem for adding new developers. Every large monolith has its own idiosyncrasies that prevent new developers from quickly building features or coming up to speed. At a certain point, operational and development paralysis sets in as it becomes impossible to change the codebase without breaking something, and this is when service-oriented architecture takes over.
Usually there comes a point where adding code to a monolith is no longer tenable, and the code is migrated into services. The benefits of services are that the code chunks are smaller and more isolated, and each service can have its own database and ownership by a product team in a larger business. The downside of services is the API maintenance needed to keep APIs consistent between client and server, though this has largely been addressed by tools like gRPC. The services themselves can eventually become large, complex monoliths in certain cases. Cultural isolation of code also takes over, especially on bigger teams: how one piece of code is developed can start diverging from how the rest of the company develops code, with different styles and so on.
Deployment becomes a hodgepodge of different issues with scaling, security, and maintenance of the different components, and it is a total pain from an operational standpoint, especially if there are dependent API changes. Docker and Kubernetes are currently solving the issues around deployment of services by giving you a consistent way of deploying different codebases, but API changes are still a big problem.
However, the problems with services and containers now move to a different area. Docker and Kubernetes solve the deployment issue, but the following problems remain: the cost of servers sitting idle waiting for requests, the environmental footprint of those idle servers, coding consistency across deployments, and the many combinations of third-party services and databases that could be put to use. It is complex and still needs DevOps, which is expensive, especially when the goal is to get the code out to customers faster.
Serverless technologies will be the new unit of service in the future. The benefits are numerous to the customer and it will accelerate the development of new services and tools once some of their current issues are resolved.
The first benefit is that there is no packaging. Most of the serverless services (AWS Lambda, Azure Functions, Google Cloud Functions, Cloudflare Workers) are converging on a common way of handling function code. They each have a standard API for requests, and they all allow JSON for responses. This removes the need to learn Docker and the other DevOps tools, which provide no benefit to engineers who just want to use their existing tooling without learning new ones.
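To make this concrete, here is a minimal sketch in Python of what such a function looks like, assuming an AWS Lambda-style event/context signature (the handler name and event fields are hypothetical). Part of the appeal is that the same function runs locally as plain Python:

```python
import json

def handler(event, context):
    # A Lambda-style entry point: the provider hands the function a
    # JSON-derived event dict plus a context object, and the returned
    # dict is serialized back to JSON for the caller.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally, the handler is just a function call: no image to build,
# no container runtime to learn.
print(handler({"name": "serverless"}, None)["body"])
```

Each provider's exact event shape differs, but the pattern of a plain function taking a JSON-like input and returning a JSON-like output is common to all of them.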
The second benefit is that security becomes much easier to deal with. AWS has started using Firecracker, which runs serverless workloads in lightweight, high-speed VMs, as opposed to Docker’s namespace- and cgroup-style separation, which is more a quirk of the kernel than true isolation of environments. This solves a lot of PCI / HIPAA compliance issues.
There is minimal operational complexity to deploying to serverless. The removal of operational constraints means developers can deploy without having to worry about aspects such as scaling, security, and the ongoing maintenance of their infrastructure.
The natural course of monoliths is additive: they get bigger and bigger. Serverless environments, however, provide a limited, constrained environment in which to execute code. This is a positive, as it leads to smaller code overall, with complex chunks being moved into their own smaller functions.
All of this also leads to the following cultural benefits. Smaller code sizes mean that you can develop a consistent coding guideline. It is easier to bring people into the code and get them up to speed. You can hire people and give them the ability to write an isolated piece of code without having to give them access to the whole thing. It creates interchangeability of people, division of labor, and faster delivery of code to customers.
However, there are still some problems that need to be resolved first. The tooling is still evolving and immature, though frameworks like the Serverless Framework have done a great job creating a simplified workflow. Each provider has its own interface and constraints, so this too is evolving. The types of tasks that can be run are still limited since most providers don’t allow long-running tasks, and certain constraints on time and environment require reworking how an environment is built.
What are we Solving?
All of this comes to the point now of what it is that we are actually solving. We believe that Serverless technology is the perfect candidate to test the Crowdsourcing of Coding. We think Serverless + Crowdsourcing could lead to code being written much, much faster, reduce the cost of building out teams to the bare minimum, and reduce the operational cost of the Cloud leading to savings while increasing velocity.
Code could be written much faster with Serverless + Crowdsourcing. Individual functions could be written by different coders in parallel, so tasks can be accomplished much, much quicker. You can linearly scale out the people writing code for you without having to hire a lot of people. Think of this as MapReduce for coding: the Map part is going from idea to code, and the Reduce part is combining the different code parts together. We believe that we can effectively contribute towards the Map part and partner with service integrators that can do the Reduce part. We will do this by standardizing on a minimal set of languages and technologies and creating a standard pipeline for the deployment of code, tests to ensure the quality of the code, and constant feedback to improve the code.
Serverless + Crowdsourcing can also reduce the cost of building out teams, including hiring. You can run the operation with a smaller team of integrators who put the different pieces of code together. You can reduce the headcount of teams and expand as needed, forming a hybrid organization of a core team plus a scaled team of crowdsourced coders. Smaller teams can move faster, so this allows you to increase velocity while reducing headcount.
It also reduces the operational expense of running servers. The capital expenditure of running your own datacenter was moved to the operational expenditure of running in the Cloud. Now, with servers not running when they are not needed, you can reduce that operational expenditure quite a bit more: you pay only when a request happens. A giant side benefit is that this is great for the environment, because datacenters emit a significant amount of CO2.
Why This, Why Now?
There has been quite a significant change in the last 10 years in how programming tooling has evolved. Modern languages developed in that time, such as Go, are solving problems that large developer organizations like Google face: how to remove the tabs vs. spaces argument, how to quickly onboard new engineers by reducing the features of a language, and how to make code readable by having a consistent style enforced by built-in tools like go fmt.
Like the shipping container that standardized ports around the physical world, the software container is starting to standardize the deployment of code. It is a common way of packaging code from the developer’s machine to the servers where it runs. This has immensely reduced the cost of debugging “it works on my computer” issues and the problem of having to debug across different systems. This will only get simpler over time.
Agile has gone mainstream. Agile used to be the domain of startups, and being lean was all the rage. However, the Lean Startup movement is now just the startup movement: most startups are lean. More companies want to do more with fewer resources and less waste, including in coding.
For better or worse, the sharing economy is here to stay. Economically speaking, we are able to do more with vastly less, with just-in-time services at our fingertips. Furthermore, the increased cost of living is making people reconsider where they want to live: there is an increase in coworking spaces and a general move out of the tech centers for those who are not twentysomethings, due to the sheer cost of having a family there. Remote work has started to take off with the convergence of Slack, video messaging, and the general availability of internet access.
What are the key metrics we will track?
Quality of Code - Friction of Creating Code = Customer Happiness
The only metric that matters as we grow is quality. If the quality of the code delivered wavers as we grow, then it makes no sense to continue, because no one will trust us. Software is a creative endeavor, and while we are trying to make it more of a science, at the end of the day experience dictates the quality of the code, and experience varies widely. To keep this consistent, we need strict protocols that ensure standards remain high, that our vendors are doing their part to improve internal training, and that we reduce dependence on people who may manipulate the market. As such, the quality control system we build may be the most important piece in managing the interaction of clients with vendors and ensuring a positive experience. The only metric to track week by week is the quality of ratings for our vendors.
The metric we will track to our growth is: 3% Function Growth Day over Day with a 4.5/5 Assignment Rating.
What will we do to scale quickly?
We will be working first to create processes as we start building the company. We will also be using opsZero to build opsZero. Furthermore, tooling needs to be created to quickly onboard new vendors, qualify and reach out to potential customers, and create a referral engine. We will outsource as much work as possible and use off-the-shelf tools before writing our own.
Before we can scale we need to address a few things which include how do we keep quality high. This means that we need to address how customers behave, how service providers create code, and the interaction between the two.
What are some risks that we may encounter?
If the model we have succeeds, I believe that there will be competing marketplaces. Traditional do-it-all marketplaces like Upwork will likely not compete, but alternative companies may start competing on our turf. This may be additionally complicated if our quality at any point starts diminishing. If there are additional marketplaces, there may be a race to the bottom.
Monitoring in a serverless architecture is minimal. The environment constrains the number of processes you can run, so there may need to be higher-level libraries that plug in to allow monitoring.
Functions need to be small because of the maximum size of the artifact that can be deployed. So one of the challenges is having different functions work together and orchestrating, say, JSON changes in the requests and responses. There does not seem to be an easy way to do this, especially if you have multiple repositories of functions that need to work together.
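One hedged approach, sketched below in Python, is to wrap every inter-function message in a versioned envelope so that functions living in separate repositories can evolve their request/response shapes independently. The version numbers and field names here are hypothetical:

```python
import json

ENVELOPE_VERSION = 2

def wrap(payload):
    # Every message between functions carries an explicit schema version.
    return json.dumps({"version": ENVELOPE_VERSION, "payload": payload})

def unwrap(raw):
    message = json.loads(raw)
    payload = message["payload"]
    if message.get("version", 1) < 2:
        # Hypothetical migration: version 1 named the field "user"
        # instead of "user_id", so upgrade old messages on read.
        payload["user_id"] = payload.pop("user", None)
    return payload

# A current-version message and an old-version message both unwrap cleanly.
print(unwrap(wrap({"user_id": 42, "action": "signup"})))
print(unwrap(json.dumps({"version": 1, "payload": {"user": 7}})))
```

This does not solve orchestration, but it at least lets independently deployed functions tolerate each other's JSON changes for a while.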
Artifact Size Reduction
There needs to be a way to minimize packages, remove junk, and generally reduce code size when deploying functions. Artifact size becomes a particular problem when including libraries, since lots of libraries means an increase in size. So being able to figure out which code is being used and stripping out the unused code could prove quite useful.
The Serverless Framework seems to have figured this out. It is currently the primary serverless framework in use. The team has raised some funding and is building out the tooling around deployment. They may eventually add orchestration and artifact size reduction and become a Docker for serverless architecture.
However, manual Lambda deployment is pretty janky, with IAM roles being odd to set up and no way to coordinate Git with the deployment artifact. The need to manually correlate deployments makes rollbacks a bit interesting. Azure seems more mature in this regard; it ties Git and Visual Studio to the deployment of the code.
NPM for Serverless
A lot of people don't want vendor lock-in, so it would be useful to have a service like NPM that provides serverless packages that work across different providers. The closest service I have found for this is Algorithmia.
Serverless + Servers
A lot of services can be augmented with serverless even if they don't go 100% serverless. For example, I have a service in Go, which is great for certain tasks, but there might be a code path that is a better fit for another language like Python. The server code can then invoke a function in that different language.
There are also ways to offload certain tasks onto serverless so that you don't pollute the main code. For example, sending email can be moved off to a serverless function that can send based on different mailing providers. If you need to change mail providers, you change the serverless code and don't have to worry about changing your main codebase.
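As a sketch of the email example, here is what such a function might look like in Python. The provider names and adapter functions are stand-ins; a real function would call the vendor's API instead of returning a stub:

```python
import json

# Hypothetical provider adapters; real ones would call the vendor APIs.
def send_via_ses(msg):
    return {"provider": "ses", "status": "sent", "to": msg["to"]}

def send_via_sendgrid(msg):
    return {"provider": "sendgrid", "status": "sent", "to": msg["to"]}

PROVIDERS = {"ses": send_via_ses, "sendgrid": send_via_sendgrid}

# Switching mail vendors means changing this one setting inside the
# function; the main codebase that invokes the function never changes.
ACTIVE_PROVIDER = "ses"

def handler(event, context):
    message = json.loads(event["body"])
    result = PROVIDERS[ACTIVE_PROVIDER](message)
    return {"statusCode": 200, "body": json.dumps(result)}

print(handler({"body": json.dumps({"to": "user@example.com", "subject": "hi"})}, None))
```

The main application only ever knows the function's name and payload shape, which is exactly the isolation the paragraph above describes.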
The value of the stack is moving toward SOA. A lot of the value of SOA is maintainability and ease of testing, so a combined SOA and serverless architecture might be the next natural evolution.
Many companies are using Lambda, but there still needs to be a lot of training around it, since the model is so new and the tooling not quite mature.
My company Acksin is focusing on environmentalism in computing infrastructure, and one way serverless is great is that it is just-in-time computing. Serverless allows you to not run servers yourself and lets AWS deal with growing and shrinking your infrastructure. This leads to severe cuts in climate impact, since you are no longer running servers that sit idle, and hopefully underneath it all AWS is making sure its servers are heavily utilized.
What Serverless is Not Good For
Machine Learning Model Delivery
Unfortunately, due to the size constraints on the artifact that can be deployed and the generally large sizes of machine learning libraries, serverless does not seem to be a good way to deliver trained machine learning models.
Long Running Tasks
Serverless is not good for long-running tasks, since AWS Lambda limits execution time to at most 5 minutes. However, there can be a cascading pattern where function A invokes functions B, C, D, etc., splitting the large task into smaller chunks. Depending on the number of subtasks, though, this might incur extra expense due to the warm-up times of new containers.
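The cascading pattern can be sketched in Python as a coordinator that fans chunks out to workers. Here the invocations are simulated as direct calls so the example is self-contained; in production, each worker call would be an asynchronous Lambda invocation, and the function names are hypothetical:

```python
def chunk(items, size):
    # Split the large task into pieces small enough to finish well
    # under the execution time limit.
    return [items[i:i + size] for i in range(0, len(items), size)]

def worker(event, context):
    # Hypothetical worker function (B, C, D, ...): process one chunk.
    return sum(event["chunk"])

def coordinator(event, context):
    # Function A: fan the chunks out to workers and combine the results.
    partials = [worker({"chunk": c}, None)
                for c in chunk(event["items"], event["size"])]
    return {"partials": partials, "total": sum(partials)}

print(coordinator({"items": list(range(10)), "size": 4}, None))
# → {'partials': [6, 22, 17], 'total': 45}
```

The warm-up cost mentioned above shows up here as one potential cold start per chunk, so chunk size is a trade-off between the time limit and invocation overhead.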
We love Go. It is a great language and allows us to work quickly, but we also acknowledge that it may not be the best language for everything. Since we do a lot of AI and ML type stuff we use quite a bit of Python. Python has some fantastic scientific libraries which help speed up some tasks, and its more dynamic nature makes it easier to work with unstructured data.
So what is one to do? We could deploy multiple applications in an SOA architecture, with a few applications in Go and a few in Python. However, this seems like overkill and increases the complexity of our system. Adding REST versioning between systems or using something like gRPC would have worked, but we are also very much about making systems as simple as possible, and we actively try to reduce the number of technologies that we use.
One of the tasks we have is processing Acksin push requests. We want to take the incoming data, split and slice it in multiple ways, and store the results for later processing. Ideally, we want the results computed as fast as possible, so that as soon as the Acksin Agent pushes data, we can get it processed. Using a queue and having two apps would work, but then we'd have to worry about scaling our workers if we ever get a significant load.
So we have decided to augment our server-side Go code with serverless invocation. This approach has two benefits: one, we don't need to introduce a queue; two, AWS Lambda scales automatically without us having to do anything. Furthermore, since an AWS Lambda is just a function, we can easily develop new functionality locally and push it to Lambda whenever it is ready.
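A hedged sketch of that develop-locally, deploy-later workflow, on the Python side of the boundary (the handler, its fields, and the function name are hypothetical; the boto3 call shown in the comment is what the production path would use):

```python
def process_push(event, context):
    # Hypothetical handler for Acksin push data: slice the metrics a
    # few different ways for later processing.
    data = event["metrics"]
    return {"count": len(data), "max": max(data), "min": min(data)}

def invoke(function, payload, local=True):
    # During development, call the handler directly. Once deployed, the
    # same payload would instead go through the AWS SDK, e.g.:
    #   boto3.client("lambda").invoke(FunctionName="process_push",
    #       InvocationType="Event", Payload=json.dumps(payload))
    if local:
        return function(payload, None)

print(invoke(process_push, {"metrics": [3, 1, 4, 1, 5]}))
```

Because the handler is a plain function, the local and deployed paths exercise the same code, which is what makes this incremental migration comfortable.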
We here at Acksin are about two things: the environment and rapid innovation. However, the main hurdle of existing computing is how much of it is still so manual. We may have had tools like Puppet and Chef for a long time, but we are still in a lot of ways running things manually. We are running custom scripts per machine. We are performing various maintenance tasks by hand.
All of this should be moved to code. Don't use the console to start machines at all; use tools like Terraform or CloudFormation. They allow you to build an evolving architecture without having to worry about missing changes. They are quite powerful and allow you to change and deploy your architectures rapidly.
More infrastructure can be managed this way.
We engineers never really think about climate change when we are working on software or hardware. It may never occur to us, or it may not be readily apparent. What we do is so abstract that we are really separated from the way our work affects the global environment.
Further, we are constantly told to move forward. A big part of that is that we sometimes make decisions which might not be in the best interest of the environment. The environment is likely not even a thought when we are making some of the decisions that we make.
Every decision we make, however, is part of a larger body. We are here to create and evolve and affect our environment and we will do that. In future posts we will cover how you can go about addressing some of the things that we list here.
We are optimists here at Acksin. Our mission is to allow organizations to Innovate Fast while being Green. However, sometimes to know that we need to improve we need constructive feedback. So we want to give you the industry our constructive feedback. :)
Most of us are not thinking about all parts of the stack, and that is the beauty of modern computing: we have decided to focus on what we are each good at, whether it be frontend, backend, mobile apps, data science, etc.
However, what we do does affect the world. Each engineer might affect the world only slightly, but in the end it can add up in a large way. We hope this will not dissuade you from engineering. Engineering is wonderful because it can affect such a large group of people. And we hope that, by informing you, you can keep this in the back of your mind when making significant decisions.
So how does computing affect climate change?
1. Resources to mine and recycle servers
2. Energy and heat created from running software
3. Servers sitting idle and not doing anything
4. Overprovisioned machines
Most servers in the world are not tuned for performance, which results in vast amounts of waste. One study says there are 10 million idle servers in the world, or about $30 billion worth. When your servers are tuned, you spend less on servers and your apps run optimally. We found no tool to help fix this, so we decided to write one.
Autotune helps you performance tune your Linux boxes. It does this by taking complete information from the system such as CPU, Networking, IO, Memory, Processes, Limits, Disks, Cloud stats, and Containers stats. We then feed it to a Machine Learning engine and a decision-making engine called Mental Models to give you performance recommendations.
Tune your servers.
Many people running servers are doing it badly. There, I said it. Your infrastructure mostly sucks and is full of waste.
It's okay. It's not your fault. With more and more expected from every engineer this kind of knowledge can fall by the wayside. It's just not important for your company in the grand scheme of things. Your company wants to ship product.
However, we are fans of systems thinking. We believe that a product is more than just individual components; it is all the components working in tandem. We don't say a body is healthy when its liver has become enlarged.
So we want you to tune your machines. Tuning machines has multiple benefits:
- You save money.
- You make better use of your resources.
- You help the environment.
We are big proponents of the last one, but I'm sure you are big on the first. By tuning machines you don't have to choose.
Most servers aren't tuned for performance. They are suboptimal for many reasons, including historical ones.
The goal, then, is to make sure you understand what you get by tuning your machines: peak performance, because each machine is optimized to use its full resources, and machines and infrastructure that work together.
An individual machine no longer matters. Find out how your entire infrastructure can be adjusted.
When I started opsZero in April 2016, it was a completely different DevOps arena. Kubernetes was beginning to mature, AWS Lambda was still in its infancy, and DevOps was just getting involved with Infrastructure as Code using Terraform. I wanted to move DevOps to a world where there were “No Idle Servers,” and lo and behold, so did other people, including the Cloud providers.
We are now in a world where the abstraction has moved from the machine to the container. The technology to remove idle servers is now largely the norm, and tools and services like AWS Lambda and Kubernetes are the new abstractions. This new abstraction dramatically diminishes the need for dedicated DevOps and increases the power of the developer.
The migration to containers is in full throttle: Kubernetes has won the battle for container orchestration and is starting to stabilize. Serverless is maturing at a consistent pace, with AWS pushing hard on its serverless stack. We at opsZero have needed to acknowledge that DevOps is going to become more self-serve for developers. This self-serve nature is a good thing!
Since the game has changed since opsZero started, we need to change course. opsZero will be playing a new game, which means a significant change to our business model. However, we believe this model will increase delivery speed, ensure higher quality, reduce variability, provide greater focus, and help us scale. This transition is essential because the model we entered the field with is no longer relevant in 2019.
These are the changes that will happen:
- We will be discontinuing DevOps as a Service and will no longer be doing work that requires long maintenance cycles.
- Our primary focus will be migrating organizations to Kubernetes, with a focus on HIPAA/SOC2/PCI Compliance and Security.
- Our secondary focus will be setting up repeatable CI/CD Pipelines that include Infrastructure as Code, using Helm and Serverless.
- Our third focus will be to migrate existing codebases directly to AWS Lambda if it makes sense.
We will be discontinuing DevOps as a Service. When we started opsZero in 2016, DevOps as a Service made sense, but as we enter 2019, DevOps has subdivided into SecurityOps, DataOps, and more. We are not interested in the complications and one-off nature of some of those tasks, as they are not our primary expertise.
The difficulty of scaling DevOps as a Service has become more and more apparent. The lack of repeatability has made it difficult to hire and challenging to project deliverables. For example, our communication overhead was becoming client environments x number of clients x members of our team. The permutations and slight variability of each environment meant that knowledge transfer became almost impossible without another team member essentially redoing the tasks.
This variability has led to increased stress, and deliverables have become less consistent over time. In Theory of Constraints terms, our work has become variable with limited slack, which means that opsZero has become a bottleneck to our customers. Our quality of work has suffered, and most critically, each additional client has made our work suffer exponentially.
Because of the variability and breadth of work we were willing to do, growth has become entirely stagnant. Growth is harder to justify when quality suffers so much. The lesson we finally learned is that we were trying to be too many things for too many people. What we were doing was not a business but moonlighting multiple jobs.
A business needs hyperfocus on a few things so that it is a big fish in a small pond. In the book Competitive Strategy, Michael Porter states that a firm can pursue three different strategies: Cost, Focus, and Differentiation. Everyone else is “Stuck in the Middle,” where their competitive strategy is not well defined, and they end up not doing anything well.
To get away from being “Stuck in the Middle,” we first had to decide what game it is that we wanted to play.
- We are interested in development that is more ephemeral in nature.
- We want to get people set up on a solid footing, but we are not the best for on-call duties.
- We are also not interested in working outside of Kubernetes, AWS Lambda and Terraform for the most part.
All of these are primary reasons to discontinue DevOps as a Service altogether. The lessons we learned have been hugely beneficial, and in retrospect, it was a mistake to wait this long. The con of discontinuing DevOps as a Service is that it will hit our revenue in the immediate term. However, the transition will allow us to focus on the core DevOps values that make us happy and, hopefully, our customers happy.
Our new game entails hyperfocus on three specific verticals and creating tooling around those three things to best help address the market. However, before we get to those, I want to get into some personal history about why I got into DevOps.
I started programming at a young age around 12. I remember coming home from school in 8th grade to install Linux 2.4.0 after waiting to download it on a 28k dial-up modem for three days. I installed it on my Slackware Linux machine only to have it completely hose my computer… Well, I kept at it and got better and fell in love with systems, Linux, UNIX, and the open-source community.
However, when I first started working, I was a backend engineer. I worked on software systems, and I loved that, but deployment in teams always sucked. I entered doing DevOps because I wanted to make the lives of backend engineers easier. I tried to make deployment easier so that products could be created faster. I wanted people to deploy code more quickly so that customers could start using them sooner. I wanted people to waste less time deploying code and less time in QA and more time moving rapidly forward.
My true passion is the speed of idea, to code, to customer. That is what opsZero’s vision will be: “Ideas Deployed Faster.” Since the best ways to get code deployed faster within your own Cloud are Kubernetes and AWS Lambda, those are where opsZero will focus.
Kubernetes is the dominant orchestration tool and the one that is the most generic across all the Cloud Providers. Furthermore, tools like Helm allow creating deployments of common services within the Kubernetes environment. By using Helm, Developers can deploy additional services as part of their code deploys.
Furthermore, Kubernetes presents an opportunity for cost savings, repeatability, and future-proofing for companies. As the Cloud vendors take on more and more of the Kubernetes market, there will be legacy apps that need to be migrated. We will focus on migrating these apps to Kubernetes and on accelerating the move to the container world.
One of the considerations we have is that many companies have stringent compliance requirements that need addressing. So to increase the pool of Kubernetes users, we have released AuditKube, a set of Terraform scripts that allow us to create a consistent Compliance-oriented deployment. The scripts will allow us and others to develop consistent Kubernetes environments across AWS, GCP, and Azure using our partners Foxpass for authentication and LogDNA for Logging.
TL;DR 1. We will migrate legacy apps to run on Docker and Kubernetes 2. We will create a set of repeatable scripts for a compliance-oriented Kubernetes environment.
We find that most companies do not have a good CI pipeline even in 2019, and we see that being a significant pain to address. The significant change in 2019 is that tools like Helm and the Serverless Framework also allow us to address infrastructure needs: engineers can quickly add services like RabbitMQ, Redis, or Elasticsearch to their CI/CD pipeline to test them out.
As part of helping companies migrate to Kubernetes, we will set up a CI/CD Pipeline that will create repeatable deploys that include:
- Feature branch deploys
- Infrastructure changes as part of the deploys
To ensure that we can do this quickly, we are working on a set of scripts called DeployTag, written by Kyle Barton, that will allow us to set up CI/CD pipelines quickly and with less variability. By reducing variability, we get more consistency and hence a better deployment pipeline. We want people to deploy 100 times a day if they desire and to be able to create new environments to test out features rapidly.
TL;DR 1. We will create repeatable CI/CD Pipelines for Companies using Helm and Serverless Framework
The future of the Cloud is serverless. We are still in the early-adopter stage of this migration; the most mature environment so far is AWS Lambda, though Google and Azure will also play in the field. We have to pick a horse, and for serverless specifically, that horse is AWS Lambda.
Many applications benefit from AWS Lambda:
- apps that have intermittent usage
- apps that don’t need to be up all the time
- apps that can scale with usage.
The majority of apps likely fall into this realm and are likely idle, costing money and using energy when they could just run when invoked. Our environmentalist sensibilities make us want to migrate services off these self-run idle computers and into the Cloud.
AWS Lambda does have pitfalls, which will be addressed over time as Amazon invests money and energy. However, we think a majority of applications can migrate to AWS Lambda now and gain all the benefits it provides.
We will develop our strengths around AWS Lambda and start establishing good development practices that allow us to migrate codebases over effectively. We will need to figure out how to do this while reducing the variability that is common in most codebases.
- Long term we will be focused on moving code bases to AWS Lambda.
By focusing on these three verticals, we are setting ourselves up to address current market needs and future market growth. By limiting our efforts to fewer things, we can set up our customers and ourselves for future success while ensuring high quality, speed, and, most importantly, reduced variability.
To do these three things well, we will be doubling our efforts into creating processes, setting up automation, building a partner network, and increasing our open source contributions.
We don’t think DevOps will go away, but a majority of tasks will become more specialized over time, and we must understand and exploit our strengths. We have always been interested in getting ideas to the world faster, and we are finally at a stage where we can do that with Kubernetes and Serverless. We are excited to move forward and get to work on our next iteration!