Consider the following scenario: you are a developer, "in the zone", your entire being focused on finalizing that piece of code before the last ferry departs. Suddenly your train of thought is broken by the unpleasant sound of the phone. An application is acting up, and your full attention is requested to find the culprit. Time for some real-time troubleshooting. After all, we love challenges and we thrive under pressure, but not exactly when your mind was already engaged in crafting that elusive solution in code.
The troubleshooting activity begins: browsing different flavors of dashboards and admin consoles, jumping from server to server, connecting discrete sets of events, probing multiple components, checking software configuration changes and user tickets, keeping an eye on relevant news and social media, conferencing with business users, engineers, and operations managers from all over the organization. All of this and more has to be analyzed to make sense of the issue at hand and identify an actionable item. The last ferry is long gone, and the stakes are high.
Did somebody sound the alarm amid the flood of emails that the monitoring tools send every day to developers, product owners, sysadmins, network admins, database admins, security admins, and everyone else interested in "following" the product? Hundreds or even thousands of them are generated per product during a business-as-usual shift. Who has time to read emails when you are "in the zone"? Most likely they lie buried in a sea of email folders, waiting to be read.
Adhering to the Reactive Manifesto, new consumer-facing capabilities are developed by teams that embed Artificial Intelligence algorithms to transform activities that generate extraordinary amounts of data where humans struggle to keep up, whether to simplify user interaction, mitigate error-prone actions, or reduce the market impact of executing massive trades.
With the Agile Manifesto in mind and treating infrastructure as code, we could implement a user story in which the DevOps team member supporting a portfolio of products is involved only when the Artificial Intelligence algorithms, performing real-time analysis, surface outliers as events of special interest. Thus we can reduce the time to market of the product under development and minimize the expensive human context switching otherwise incurred. The algorithms can analyze in real time the streams of data generated by the relevant infrastructure sensors, using supervised, unsupervised, or deep learning techniques.
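As a minimal sketch of the idea (not any specific AIOps product), one of the simplest unsupervised techniques is a rolling z-score over a metric stream: the alert fires only when a reading deviates sharply from its recent window, so humans are interrupted only for outliers. The `window` and `threshold` parameters and the simulated latency data below are illustrative assumptions.

```python
from collections import deque
import math

def rolling_zscore_outliers(stream, window=30, threshold=3.0):
    """Flag points whose z-score against a rolling window exceeds threshold.

    Returns a list of (index, value) pairs for the flagged readings.
    """
    buf = deque(maxlen=window)  # the most recent `window` readings
    outliers = []
    for i, x in enumerate(stream):
        if len(buf) == window:  # only score once the window is full
            mean = sum(buf) / window
            var = sum((v - mean) ** 2 for v in buf) / window
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > threshold:
                outliers.append((i, x))
        buf.append(x)
    return outliers

# Simulated latency stream (ms): a steady baseline with one injected spike.
metrics = [100 + (i % 5) for i in range(60)]
metrics[45] = 400  # the anomaly a human should be paged about
print(rolling_zscore_outliers(metrics))  # → [(45, 400)]
```

In practice the same shape of computation would run continuously against sensor feeds, and only the flagged events would be routed to the on-call engineer instead of the full email flood.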
Algorithmic IT Operations (AIOps) and DevOps practices can blend into a highly reactive collaboration, aligning the organization's efforts to adapt to a changing environment where being responsive, resilient, and scalable is a must, while sparing human brain cycles for the critical situations that require our full attention. When can we start the feature mapping?
DevOps 2.0 includes all groups that contribute to delivering working, elastic, secure software and infrastructure with predictable costs.
Patrick Debois coined the term DevOps in 2009. He took his inspiration from a presentation by John Allspaw and Paul Hammond titled 10+ Deploys Per Day: Dev and Ops Cooperation at Flickr.
Mainframe modernization offers opportunities to clear roadblocks and re-engineer legacy processes, allowing organizations to keep up with the demands of the digital economy.
Will mainframe systems begin to show up as significant business risks on auditors’ reports? If they haven’t been modernized, yes. In fact, it’s already happening.
Get in touch with our experts and find out how Astadia's range of tools and experience can support your team. Contact us now.