2018 Blogs, Blog

Globally, organizations have embraced cloud computing and its delivery models for their numerous advantages. Gartner predicts that the public cloud market will grow 21.4 percent by the end of 2018, up from $153.5 billion in 2017. Cloud computing services give organizations an opportunity to consume specific services through the delivery models that suit them best. They help increase business velocity and reduce capital expenditure by converting it into operating expenditure.


Capex to Opex Structure

Capital expenditure refers to the money spent in purchasing hardware and in building and managing in-house IT infrastructure. With cloud computing, it is easy to access the entire storage and network infrastructure from a data center without any in-house infrastructure requirements. Cloud service providers also offer the required hardware infrastructure and resource provisioning as per business requirements. Resources can be consumed according to the need of the hour. Cloud computing also offers flexibility and scalability as per business demands.


All these factors help organizations move from a fixed cost structure for capital expenditure to a variable cost structure for the operating expenditure.


Cost of Assets and IT Service Management

After moving to a variable cost structure, organizations must look at the components of their cost structure. These include the cost of assets and the cost of service, or the cost of IT service management. The cost of assets shows a considerable reduction once the infrastructure or assets move to the cloud. The cost of service remains vital, as it depends on day-to-day IT operations and represents the day-after-cloud scenario. The leverage of cloud computing can only be realized if the cost of IT service management is brought down.


Growing ITSM or ITOPs Market and High Stakes

While IT service management (ITSM) has taken a new avatar as IT operations management (ITOM), incident management remains the critical IT support process in every organization. The incident response market is expanding rapidly as more enterprises move to the cloud every year. According to Markets and Markets, the incident response market is expected to grow to $33.76 billion by 2023 from $13.38 billion in 2018. The key factors driving the incident response market are heavy financial losses after incidents occur, the rise in security breaches targeting enterprises, and compliance requirements such as the EU’s General Data Protection Regulation (GDPR).


Service outages or service degradation can impact key business operations. A survey conducted by ITIC indicates that 33 percent of enterprises reported that one hour of downtime could cost them $1 million to more than $5 million.


Cost per IT Ticket: The Least Common Denominator of Cost of ITSM

As organizations have high stakes in ensuring that business services run smoothly, IT ops teams carry the added responsibility of responding to incidents faster without compromising on the quality of service. The two important metrics for any incident management process are 1) cost per IT ticket and 2) mean time to resolution (MTTR). While cost per ticket impacts the overall operating expenditure, MTTR impacts customer satisfaction: the higher the MTTR, the longer it takes to resolve tickets and, hence, the lower the customer satisfaction.


Cost per ticket is the total monthly operating expenditure of the IT ops team (IT service desk) divided by its monthly ticket volume. According to an HDI study, the average cost per service desk ticket in North America is $15.56. Cost per ticket increases as a ticket gets escalated and moves up the life cycle; for an L3 ticket, the average cost per ticket in North America is about $80 to over $100.
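As a simple illustration of this metric, here is a minimal sketch in Python; the figures are invented for the example and are not benchmarks.

    # Cost per ticket = total monthly opex of the IT ops team / monthly ticket volume
    monthly_opex = 95_000          # hypothetical monthly operating expenditure (USD)
    monthly_ticket_volume = 6_100  # hypothetical ticket count for the same month

    cost_per_ticket = monthly_opex / monthly_ticket_volume
    print(f"Cost per ticket: ${cost_per_ticket:.2f}")   # about $15.57, close to the HDI average cited above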


Severity vs. Volume of IT Tickets

From our experience managing cloud IT ops for our clients, we understand that organizations normally look at the volume and severity of IT tickets. They target the High Severity and High Volume quadrants to reduce the cost of tickets. However, we strongly believe organizations should start their journey with the low-hanging fruit: Low Severity tickets, which are repeatable in nature and can be automated using bots.


In the next blog, we will elaborate on this approach, which can help organizations measure and reduce the cost of IT tickets.


About the Author:

Neeraj Deuskar is the Director and Global Head of Marketing for Relevance Lab (www.relevancelab.com). Relevance Lab is a DevOps and Automation specialist company, making cloud adoption easy for global enterprises. In his current role, Neeraj formulates and implements the global marketing strategy, with key responsibilities for brand and pipeline impact. Prior to his current role, Neeraj managed global marketing teams for various IT product and services organizations and handled responsibilities including strategy formulation, product and solutions marketing, demand generation, digital marketing, influencer marketing, thought leadership and branding. Neeraj holds a B.E. in Production Engineering and an MBA in Marketing, both from the University of Mumbai, India.


(This blog was originally published in DevOps.com and can be read here: https://devops.com/cost-per-it-ticket-think-beyond-opex-in-your-cloud-journey/ )


2018 Blogs, Blog

I recently attended the LogiPharma 2018 conference in the Penn’s Landing area of Philadelphia, Pennsylvania. Overlooking the Delaware River, 250+ pharma supply chain professionals gathered for a productive, interactive and educational show concentrating on how the intelligent, digital supply chain is positively changing the industry, as well as some of the challenges organizations face as they assess, develop, implement and deploy these technologies.


An overarching theme across several face-to-face meetings we had with supply chain professionals, prospects, partners and new faces was end-to-end supply chain visibility, analytics, demand planning, inventory management and cold chain management, which especially resonated with the immunotherapy and clinical manufacturers. In the logistics space, IoT is clearly alive and well, from intelligent GPS trackers able to read the inventory of an entire warehouse, to devices carrying pallet-level intelligence, refrigerated scanning and inventory intelligence, as well as virtual reality robots delivering packages, which was fun to try out.


The conference kicked off with Brad Pawlowski, Managing Director at Accenture, who heads its supply chain practice. His opening remarks focused on the need for an intelligent model to connect directly to the customer, creating a network of thousands upon thousands, in order to capture information on every consumer with a need in the marketplace. In a sea of multiple end-to-end supply chains, he asked the audience: what assets do you want to have? What visibility do you need? How do you implement this successfully? He advised that to proceed effectively into the future, organizations must become their own control towers and become “liquid” organizations, as situationally responsive as possible, turning their focus to a customer service-oriented model. He ended with an impactful thought: data is no longer a noun, it is a verb, and how we use that verb will determine our ability to stay competitive in a field that is changing and expanding rapidly.


The next two days of speakers, round tables, live polling, panels and networking events dove into the concerns of minimizing risk, maximizing speed and efficiency, training and development of resources, strengthening collaboration, contingency planning, and technology assessment and use cases. Keynote speaker Chris Mlynek from AbbVie spoke about the company’s ability to get life-saving Parkinson’s and other medications directly to patients in Puerto Rico after Hurricane Maria in 2017. Seeing a video of a patient without his Parkinson’s medication left quite a lasting impression.


After attending LogiPharma 2018, I am more convinced than ever that a critical success factor for the future of the pharmaceutical industry lies in the establishment of a digital supply chain. How can we help you with yours?


About the Author:


Amber Stanton is the Senior Director of Business Development at Relevance Lab. She is responsible for new business growth for the organization in the US region and maintains C-level relationships with key industry executives. Amber has more than 25 years of industry experience in business development, marketing and account management across industry verticals such as publishing, financial services, healthcare, business intelligence and education.


 #SupplyChainManagement #Analytics #MachineLearning #BigData



2018 Blogs, Blog

The McKinsey Global Institute’s 2018 report states that Artificial Intelligence (AI) has the potential to create annual value of $3.5 trillion to $5.8 trillion across different industry sectors. Today, AI in finance and IT alone accounts for about $100 billion, which is why it is becoming quite the game changer in the IT world.


With the onset of cloud adoption, the world of IT DevOps has changed dramatically. The focus of IT Ops is shifting to an integrated, service-centric approach that maximizes the availability of business services. AI can help IT Ops with early detection of outages, root cause prediction, identification of systems and nodes that are susceptible to outages, prediction of average resolution time and more. This article highlights a few use cases where AI can be integrated with IT Ops, simplifying day-to-day operations and making remediation more robust.


1.) Predictive analytics of outages: False positives cause alert fatigue for IT Ops teams. Surveys indicate that about 52% of security alerts are false positives. This puts a lot of pressure on the teams, as they have to review each of these alerts manually. In such a scenario, deep neural networks can predict whether an alert will result in an outage.



[Figure: alert types feed into a neural network’s hidden layers, which output a Yes/No outage prediction]


A feed-forward network trained with backpropagation and two hidden layers should yield good results in predicting outages, as illustrated above. All alert types within a stipulated time window act as inputs, and the outage is the output. Historical data should be used to train the model. Every enterprise has its own fault lines and weaknesses, and it is only through historical data that these latent features surface; hence every enterprise should build its own customized model, as a “one size fits all” model has a higher likelihood of not delivering the expected outcomes.


The alternative method is logistic regression, where all the alert types are input variables and the binary outage indicator is the output.


Logistic regression measures the relationship between the categorical dependent variables and one or more independent variables by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Thus, it treats the same set of problems as probit regression using similar techniques, with the latter using a cumulative normal distribution curve instead. 
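To make this concrete, below is a minimal sketch using scikit-learn; the CSV file, column names and network sizes are hypothetical placeholders, not a reference to any specific enterprise’s data or tooling.

    # Predicting outages from alert-type counts per time window (illustrative only).
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier   # feed-forward network trained with backpropagation
    from sklearn.metrics import classification_report

    # Each row = one time window; feature columns = counts of each alert type;
    # "outage" = 1 if that window ended in an outage, else 0.
    data = pd.read_csv("alert_windows.csv")             # hypothetical historical data
    X, y = data.drop(columns=["outage"]), data["outage"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

    model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=42)  # two hidden layers
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))

Swapping MLPClassifier for sklearn.linear_model.LogisticRegression() gives the simpler logistic regression alternative described above.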


2.) Root Cause classification and prediction: This is a two-step process. In the first step, root causes are classified based on keyword search: Natural Language Processing (NLP) is used to extract key terms from free-flow Root Cause Analysis fields and classify them into predefined root causes. This can be done in a supervised or unsupervised manner.


In the second step, a Random Forest or a multiclass Neural Network can be used to predict root causes, with the other ticket attributes acting as inputs. Based on the data volume and data types, one can choose the right classification model. In general, Random Forest has better accuracy, but it needs structured data and correct labeling, and it is less tolerant of data-quality issues. A multiclass Neural Network needs a large volume of training data; it is more fault tolerant but slightly less accurate.
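A minimal sketch of such a pipeline with scikit-learn is shown below; the sample RCA notes and category labels are invented purely for illustration.

    # Classify free-text root cause analysis (RCA) notes into predefined categories.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier

    rca_notes = [
        "disk full on database server, archive logs not purged",
        "expired ssl certificate on load balancer",
        "memory leak in payment service after deployment",
    ]
    labels = ["capacity", "certificate", "application"]   # predefined root causes

    clf = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=200, random_state=0))
    clf.fit(rca_notes, labels)

    print(clf.predict(["java heap exhausted after new release"]))   # prediction on unseen RCA text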



3.) Prediction of average time to close a ticket: A simple weighted-average formula can be used to predict the time taken for ticket resolution.


Avg time (t) = (a1·T1 + a2·T2 + a3·T3) / (count of T1 + T2 + T3)

where T1, T2 and T3 are the ticket counts for each ticket type and a1, a2 and a3 are their corresponding weights.


Other attributes can be used to segment tickets into the right cohorts to make the prediction more reliable. This helps with better resource planning and utilization. The weighting of features can be done heuristically or empirically.
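For illustration, a minimal sketch of the weighted-average estimate with made-up weights and ticket counts:

    # Weighted-average prediction of time to close (illustrative numbers only).
    weights = {"incident": 4.0, "service_request": 1.5, "change": 8.0}    # assumed avg hours per ticket type
    ticket_counts = {"incident": 30, "service_request": 50, "change": 5}  # tickets of each type

    weighted_sum = sum(weights[t] * n for t, n in ticket_counts.items())
    total_tickets = sum(ticket_counts.values())

    avg_time_to_close = weighted_sum / total_tickets
    print(f"Predicted average time to close: {avg_time_to_close:.1f} hours")   # (120 + 75 + 40) / 85 = 2.8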


4.) Unusual Load on System: Simple anomaly detection algorithms can indicate whether the system is under normal load or showing high variance. A high variance or deviation from the average on a time series can point to unusual activity or to resources that are not being freed up. However, the algorithm should account for seasonality, as system load is a function of time and season.
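Here is a minimal sketch, assuming hourly load samples in a pandas Series with a DatetimeIndex; it flags deviations against the historical distribution for the same hour of the week, a crude way of handling daily and weekly seasonality.

    import pandas as pd

    def flag_unusual_load(load: pd.Series, threshold: float = 3.0) -> pd.Series:
        """Mark hours whose load deviates strongly from the historical mean
        for that same hour of the week (a simple seasonal z-score)."""
        hour_of_week = load.index.dayofweek * 24 + load.index.hour
        grouped = load.groupby(hour_of_week)
        mean = grouped.transform("mean")
        std = grouped.transform("std").replace(0, 1e-9)   # avoid division by zero
        z = (load - mean) / std
        return z.abs() > threshold

    # anomalies = flag_unusual_load(hourly_cpu_load)   # hourly_cpu_load: a hypothetical Series of load readings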


Given the above scenarios, it is obvious that AI has a tremendous opportunity to serve IT operations. It can be used for several IT Ops tasks, including prediction, event correlation, detection of unusual loads on systems (e.g., cyber attacks) and remediation based on root cause analysis.


About the Author:

Vivek Singh is a Senior Director at Relevance Lab and has around 22 years of IT experience across several large enterprises and startups. He is a data architect, an open source evangelist and a chief contributor to the Open Source Data Quality project. He is the author of the novel “The Reverse Journey”.

(The article was originally published in DevOps.com and can be read here: https://devops.com/artificial-intelligence-coming-to-the-rescue-of-itops/ )


2018 Blogs, Blog

A few days ago, I was traveling from Bangalore to Mumbai. It was an overcast, wet morning, so I started early to avoid battling traffic and worrying about the prospect of being delayed. At the airport, I checked in, went through the usual formalities and boarded the flight. I was anticipating a delay, but to my surprise the flight was on time.


While we were approaching the main runway, I could see many flights ahead of us in a queue, waiting for their turn to take off. At the same time, two flights landed within a couple of minutes of each other. The entire runway environment and its surroundings looked terribly busy. While our flight was preparing to take off, the Air Traffic Control (ATC) tower grabbed my attention. That tall structure looked very calm in the midst of what seemed like chaos, orchestrating every move of the aircraft and making sure that ground operations were smooth, error free and efficient in difficult weather conditions.


I started comparing the runway and airport ground operations with the complex IT environment in enterprises today, and the challenges it poses to IT operations teams. Today, critical business services reside on complex IT infrastructure spanning on-premise, cloud and hybrid cloud environments. These require security, scalability and continuous monitoring. But do they have an ATC, a Command Center that can orchestrate and monitor all the IT assets and infrastructure for smooth functioning? For instance, if the payment service of an e-commerce provider is down for a few minutes, the provider incurs significant losses and the overall business opportunity takes an adverse hit.


Perhaps today’s IT operations teams need one such Command Center, just like the ATC at an airport, so that they can fight downtime, eliminate irrelevant noise in operations and provide critical remediation. This Command Center should provide a 360-degree view of the health of the IT infrastructure and the availability of business services, besides a topology view of the dependent node structure. This helps with root cause analysis of a particular IT incident or event. The Command Center should also provide a complete view of all IT assets, aggregated alerts, outage history, past incidents and related communication, enabling the IT team to predict future occurrences of such events or incidents and prevent outages of critical business services. And if outages or incidents do occur, it would be a boon for the IT operations team if the Command Center could provide critical data-driven insights and suggest remedial actions, which in turn could be provisioned with proactive BOTs.


I arrived at my destination on time, thanks to the ATC that made it possible despite the challenging and complex weather conditions. This brings me to a critical question: do you have the ATC or Command Center for IT operations that can help you sustain, pre-empt and continue business operations in a complex IT environment?


About the Author:


Neeraj Deuskar is the Director and Global Head of Marketing for Relevance Lab. Relevance Lab is a DevOps and Automation specialist company, making cloud adoption easy for global enterprises. In his current role, Neeraj formulates and implements the global marketing strategy, with key responsibilities for brand and pipeline impact. Prior to his current role, he managed global marketing teams for various IT product and services organizations and handled responsibilities including strategy formulation, product and solutions marketing, demand generation, digital marketing, influencer marketing, thought leadership and branding. Neeraj holds a B.E. in Production Engineering and an MBA in Marketing, both from the University of Mumbai, India.


(This blog was originally published in Devops.com and can also be read here: https://devops.com/the-need-for-a-command-center-in-managing-complex-it-operations/ )



2018 Blogs, Blog

As per analysts’ forecasts, revenue generated by global B2B players from their online platforms is expected to reach US$6.7 trillion in sales worldwide by 2020. While more and more B2B players are going online, it should be noted that B2B models are more complex than B2C models, for example:


  • Higher frequency of transactions per customer & order volumes
  • Price fluctuations and longer-term contracts & business relationships
  • Need for multiple user management, levels of approvals/workflows

Specific to the high-tech distribution industry (comprising distributors in IT, telecom, consumer electronics, etc.), distributors are being disrupted by:


  • Shrinking margins in traditional business models
  • New entrants coming with online models
  • Shift in Enterprise Buying Behavior from “Asset” (Capex) to “On-Demand Use” (Opex)
  • Push from OEMs to re-model business – from “offline and volume” to “online and value”

These triggers are driving their need for:


  • Wider set of offerings to cater across larger geographies
  • Bundled offerings to re-orient from “transaction” to “lifecycle”, with higher margins
  • Real time visibility with Analytics across the Supply Chain for better efficiencies
  • “Self-Service” Portals for better user experience and information on demand
  • Managing renewals efficiently and on time


Our “RL Catalyst Cloud Marketplace” solution covers business processes such as:


  • Partner/Reseller Registration, On-boarding, Credit limit, Order Placement & Billing
  • End-Customer On-Boarding, Spend Limit Monitoring & Modifications
  • Contracts, Order Fulfilment, Returns, Vendor Bill Reconciliation
  • Catalogue & Content Management
  • Pricing, Discounts, Promotions, Rewards & Loyalty Management
  • User Roles Management, Reports & Dashboards

The value that our solution drives includes increased revenues, reduced cost of operations, increased efficiencies and faster time to market. It also covers the need for “Day After Cloud” management with the “RL Catalyst ArcNet Command Center”, which provides integrated Cloud Care services and makes it a differentiated solution for distributors, CSPs, MSPs and hosting providers.


We further believe that the context, applicability and value of this solution are also relevant for B2B distributors in other industries with a multi-layer distribution model, such as pharma, FMCG, CPG, automotive, etc., where cloud-driven online business models are driving them towards digital transformation.



2018 Blogs, Blog

In today’s world, where Cloud has become all-pervasive and a must-have, the questions around Cloud are moving away from “Why Cloud”, “What are the benefits” and “Which type of Cloud” to “How to move to Cloud”, “How to optimally use Cloud”, etc. In this blog, we are going to talk about how RLCatalyst (the DevOps and Automation product from Relevance Lab) can be leveraged for efficient and effective Cloud migration.


While there is no single universally accepted approach for Cloud migration, organisations and service providers often depend on their goals and objectives, existing landscape and architecture, toolsets and guidelines to achieve cloud adoption. Relevance Lab’s Cloud migration approach advocates a 4-stage structured methodology of DISCOVER → ASSESS → PLAN → MIGRATE (DAPM), involving consulting services and its RLCatalyst product.


[Figure: the DISCOVER → ASSESS → PLAN → MIGRATE (DAPM) methodology]

In the Discover stage, we identify and document the entire inventory planned for migration – applications, infrastructure and dependencies. Studying the characteristics of each of these helps in the next phase, Assessment. Organisations which already maintain such an inventory may skip this step and proceed directly to assessment based on it.


The Assess stage is the crucial step that determines whether a candidate qualifies for migration. Some of the factors to be determined here are business considerations (such as whether the applications hold highly sensitive customer data that requires high security, or what the Disaster Recovery requirements of the applications are) and technical considerations (such as whether the applications are heavily customized and integrated with a lot of components, or whether they are tightly bound to on-premise applications).


In the Plan stage, a migration plan is worked out covering the goals and objectives of the migration, the identified apps with dependency plans (if any), the tools to use to discover/assess/migrate, the target architecture, the target cloud (decided based on cost, performance, support, etc.), the migration strategy (lift and shift, or virtualisation and containerisation, etc.) and the expected business benefits.


While the above 3 stages are mostly consulting oriented, it is in the final Migrate stage that tool usage kicks in. While a 100% automated migration is not practically possible, RLCatalyst accelerates migration with some of its in-built capabilities around provisioning of the target environment, orchestration and post-migration monitoring support, as in:


  1. Infrastructure Automation: The first step in executing the migration is to set up the target environment as per the outcome of the assessment phase. With RLCatalyst you can do this in two phases:

    1. Use the base templates available in the product to setup the base infrastructure

    2. Create templates for specific software stacks and applications which can be run on top of the basic infrastructure

  2. Ongoing Workload Migration Automation: For the subsequent workload migration post the initial set, you can reuse the RLCatalyst templates and blueprints  

  3. BOTs accelerated Migrations: Use RLCatalyst BOTs to manage applications/services/processes post migration  


Once migration/move to Cloud is complete, the next big focus area for enterprises becomes managing the “Day After” (Post-Migrate). Optimally using the Cloud is key to achieve the overall Cloud Migration program objectives. A “global Cloud pioneer-cum-born in the Cloud company” once told us “DevOps is the right way to manage Cloud”. Along with DevOps, Automation concepts can also be leveraged to use Cloud resources effectively by extending the coverage to IT Service Ops.


RLCatalyst helps Enterprises transition to a “Managed Model” of their existing Data Center and Cloud assets and enables Cloud Monitoring and Optimization.


  • RLCatalyst Command Centre helps in monitoring all your assets (including multi/hybrid Cloud scenarios) and also tracking KPIs like capacity, usage, compliance, costs, etc. more effectively

  • RLCatalyst “Automation First” BOTS provides a pre-built “Automation Library” of assets (BOTS), with which one can quickly achieve standardization, self-service IT, 1-Click deployment, real-time monitoring, proactive remediation capabilities etc. leading to reduced manual efforts in IT services delivery

  • RLCatalyst CI/CD Cockpit helps you seamlessly manage multiple projects across environments, thereby helping you achieve faster time-to-market



2017 Blogs, Blog

It’s fast, and keeping pace is not easy


The world of technology today thrives on challenging old paradigms, the emergence of new concepts and the rapid adoption of ideas. Artificial Intelligence and BOTs, Automation and Cloud are leading the charge, and businesses of all hues, along with software product and solution providers, are jumping onto the bandwagon.


In the midst of these ideas, there’s also the concept of DevOps, which has gained ascendancy as a key driver for the adoption of BOTs, Automation and Cloud. Businesses realize massive gains in prototyping efficiency, go to market with a more stable and improved product line, and see their overall spend come down.


The mindset change


While Cloud technology has been around for a while, organizations have to actively think beyond “moving to cloud” and understand the various facets of how cloud technology can accrue benefits that are real and measurable. As a forward-thinking technology company, Relevance Lab has been at the forefront of leveraging the cloud for business transformation, because we understand that it can be the foundation of better software delivery and scalable operations.


For instance, cloud by definition encompasses multiple providers (public, private, hybrid), and it can prove to be a challenge for businesses to manage them well and seamlessly. Relevance Lab provides a platform for taking care of these distributed assets, covering orchestration, CMDB, cost, capacity usage, health and security governance.


Utilizing cloud for your IT systems is the beginning of the mindset change. Today, technology needs to be smart, analyze data and perform tasks with minimum manual intervention, streamline operations and reduce the overall SDLC for it to create a significant niche.


The era of convergence – Automation, BOTs, Cloud and DevOps


eMarketer reports that 1.4 billion people interacted with a chatbot in 2015; it’s not surprising. Being multi-functional, chat BOTs and other forms of BOTs are being used across industries to serve many functions, to engage with customers and to enhance brands. Customer-facing functions in banks, for instance, use chat BOTs to increase efficiency and reduce waiting time for customers, nudging customer satisfaction and delight in a positive direction.

BOTs bring in a level of automation that enhances the overall software development, IT operations or other functional cycles wherever deployed. With over 6 years in the DevOps and Cloud space, Relevance Lab has been leading the convergence of ABCD (Automation, BOTs, Cloud and DevOps) with its flagship product RLCatalyst. This DevOps product can help teams to collaborate more meaningfully, enhancing performance, achieving greater scale and streamlining operational functionality.


An effective combination that fuels decisive action


For almost all businesses, technology tools are meant to minimize errors, increase efficiencies and enable strategic decision-making quickly without cost overruns. However, most businesses today are also grappling with the management of too many systems and too much software, which are proving to be white elephants when it comes to revenue management. It therefore becomes imperative for businesses to adopt an automation-first approach, which can reduce human effort by a whopping 30 percent, improve quality and shrink delivery times.


Advanced technology solutions like RLCatalyst, with its integrated, DevOps approach to Automation and Cloud, give you value for money. Its self-aware and auto-remediation features enable and empower actionable decision-making. It comes equipped with 200+ pre-built BOTs for common Development, Operations and Business Process Automation activities, which further reduces the time and effort of developing your own BOTs. You can also custom-build BOTs with the inbuilt BOTs framework to suit your unique needs.


So while we continue to see evolution in the era of ABCD convergence, bringing in greater automation and effective collaboration, the question that begs to be answered is: how are you managing your enterprise’s automation?



2017 Blogs, Blog

In a fast-changing technology and business landscape, large enterprises and new business startups are exploring the right solutions for a Cloud-led Digital Transformation. We at Relevance Lab (RL) have been on a similar exploratory journey in this dynamic market over the last 6 years. With 20+ large enterprise customers, global strategic partners and the battle scars of large-scale implementations, we are happy to share our experiences.


  • RL is a new generation company focused on enabling adoption of Cloud, DevOps and Automation in the “Right-way” guided by our unique IP, RLCatalyst product
  • Adoption of Cloud is growing rapidly in both enterprises and startups – however, to get key benefits from Cloud there is a need to think beyond “Moving to Cloud”. The right approach is to “Leverage the Cloud” for business transformation and make that the foundation for better software delivery and scalable operations. RL specializes in this aspect of Cloud adoption
  • Cloud by definition encompasses multiple providers (Public, Private, Hybrid). Hence, to manage the distributed assets, RLCatalyst is the solution, taking care of Orchestration, CMDB, Cost, Capacity Usage, Health and Security Governance
  • The RLCatalyst product enables an ecosystem to Create, Validate, Deploy, Monitor and Govern an army of “Software BOTs” that drives an Automation-First approach, bringing up to 30% reduction of human effort with better quality and faster deliveries. We provide 200+ pre-built BOTs for common Development, Operations and Business Process automation activities.
  • The BOTs that are created with RLCatalyst are Intelligent and Self-Aware – this allows the BOTs to learn from previous outcomes and help achieve Auto-Remediation and Actionable Decisions
  • With a combination of Specialized Services and IP-based Product RLCatalyst, RL is partnering with very large enterprises and startups to move forward on adoption of Cloud, DevOps and Automation with software BOTs for accelerating Digital transformation
  • Also, in the highly competitive service provider market, RLCatalyst can truly catalyze a Cloud Service Provider’s business and accelerate revenue growth through its unique Cloud Service Brokering capability and its billing, invoicing, reconciliation, provisioning, contracting and other similar features.

Get in touch with us for a partnership in this exciting journey to get the right answers.



2017 Blogs, Blog

Automation is trending in IT, and everyone’s performance, from the CIO to IT administrators, is measured by the quantum of automation achieved in the organization. While there may be differences in measuring the effectiveness and efficiency of IT automation, there is no second-guessing a few key principles that need to be followed as organizations embrace automation in a big way. These principles bring a programmatic structure to IT automation, just as principles help improve the SDLC process. Let’s look at the benefits that can accrue to an organization by following these principles of automation.



The key principles are:


  1. Automation code/run books maintained in a Configuration Management System
  2. Separation of data and logic
  3. Compliance driven: trackability and auditability
  4. Integrated automation

Automation code/run books maintained in a Configuration Management System

In a traditional IT organization, it is not that automation has not been attempted or is not prevalent. There have been scripts, code and snippets that help achieve automation and drive efficiencies, small or large. However, most of this automation is unstructured and unorganized, and the entire automation effort faces the danger of dying a slow death if the initiating team member is replaced or decides to leave the organization. Additionally, if the automata (any script, entity or engine that does automation) suffer a bug or stop functioning one fine day, it is extremely challenging for the IT team to troubleshoot and make them work again. Fixes that are made may again remain undocumented, potentially paving the way for the same situation to repeat in the future. This is where maintaining the code in a Configuration Management System is of value. Such a measure helps maintain a central, well-documented repository, provides access to authorized users and, as an added benefit, improves the reusability of the automata across the organization, thereby improving overall automation efficiency.


Separation of data and logic

One of the best practices followed in software development is to separate data and logic so that no data is hard-coded and no data is exposed through the code, thereby maintaining data privacy. However, in a traditional IT automation model, this practice is often given a go-by, leading to security threats, data exposure, code malfunction due to environment changes, and so on. For instance, hard-coding a SQL username and password in an automaton performing database automation will make the automaton fail whenever the access credentials are changed in the database. The suggested measure is to maintain the data required for the automata to function in a separate YAML/JSON/XML file (preferably in an encrypted format for sensitive data), with the automation logic maintained in separate code. This naturally leads to parameterisation of the code, which in turn progressively enhances the reusability of the automata.
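As a small, hypothetical illustration of this principle (the file name, keys and health check are assumptions, not part of any specific toolset), the credentials live in a YAML file while the automation logic stays generic:

    # config.yaml (kept separately, ideally encrypted for sensitive values):
    #   db_host: db01.example.internal
    #   db_user: automation_user
    #   db_password: "********"
    import yaml          # PyYAML
    import psycopg2      # any database driver would do; used here only for illustration

    def load_config(path: str = "config.yaml") -> dict:
        """Automation data lives in a separate file, never in the logic."""
        with open(path) as f:
            return yaml.safe_load(f)

    def run_db_health_check() -> None:
        cfg = load_config()
        conn = psycopg2.connect(host=cfg["db_host"],       # no credentials hard-coded in the automaton
                                user=cfg["db_user"],
                                password=cfg["db_password"])
        with conn, conn.cursor() as cur:
            cur.execute("SELECT 1")
            print("Database reachable:", cur.fetchone()[0] == 1)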


Compliance driven: trackability and auditability

Not all automata will always be successful, so there must be room to reverse their actions in case of failures or undesirable results. Hence, it is recommended to keep track of the actions taken through automata, leaving room to reverse the changes at a later stage. Tracking can be in the form of a simple syslog entry, a Windows event log entry or a separate log file. Another key factor is to maintain an audit log so that compliance requirements are addressed. IT auditors can scan or read the audit log, gain an understanding of the automated activity and make observations on the level of compliance that activity has adhered to. It is possible to combine these two functionalities and maintain one central log repository that addresses both requirements simultaneously.
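A minimal sketch of such tracking is shown below; the logger name, file path and syslog socket are assumptions (a Windows environment would use an event-log handler instead).

    import logging
    import logging.handlers

    audit_log = logging.getLogger("automation.audit")
    audit_log.setLevel(logging.INFO)
    audit_log.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))  # Linux syslog socket
    audit_log.addHandler(logging.FileHandler("automation_audit.log"))         # central audit file

    def record_action(automaton: str, target: str, action: str, result: str) -> None:
        """Write one auditable line per automated change, so it can be traced and reversed later."""
        audit_log.info("automaton=%s target=%s action=%s result=%s",
                       automaton, target, action, result)

    record_action("password_reset_bot", "user=jdoe", "reset_password", "success")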


Integrated automation

IT automation activities usually have an impact on the end user of IT services, be it for a simple task like a password reset or a service health check. To ensure that the end-user experience stays positive, most organizations interface end-user requirements via a Service Desk tool. Given this, it is suggested that a tight integration is maintained between such a tool and the automation engine. This integration preserves the end-user experience while improving overall back-end operational efficiency. Taking the password reset scenario again as an example, imagine the end user’s interface not changing at all while the back end changes from manual effort to automated execution through tight integration with automata. Turnaround time that used to be a few hours is cut down to a few minutes (if not seconds) without impacting the end user’s front-end interface and experience.
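To make the integration concrete, here is a small, hypothetical sketch: the Service Desk tool posts new tickets to a webhook, and password reset requests are handed straight to an automaton. The endpoint, payload fields and reset function are assumptions, not any specific tool’s API.

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def reset_password(user_id: str) -> bool:
        """Placeholder for the actual automaton (e.g., a directory-service call)."""
        print(f"Resetting password for {user_id}")
        return True

    @app.route("/tickets", methods=["POST"])
    def handle_ticket():
        ticket = request.get_json()                       # assumed payload shape
        if ticket.get("category") == "password_reset":
            ok = reset_password(ticket["user_id"])
            return jsonify({"status": "resolved" if ok else "escalated"})
        return jsonify({"status": "queued_for_manual_review"})

    if __name__ == "__main__":
        app.run(port=8080)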



2016 Blogs, Blog

As an organization, your success depends on how you manage the customer experience, whether internal or external. The level of support and the turnaround time of incident resolution play a pivotal role in the customer being either delighted or extremely dissatisfied. Most technology firms have a defined Incident Management System that helps track, monitor and resolve incidents as best they can.


Typically, L1/L2 support forms the primary and secondary lines of support that receive requests and complaints via different channels such as phone, web, email or even chat regarding some technical difficulty that needs attention. Organizations need a scalable, reliable and agile system that can manage incidents without loss of time, to ensure that normal business operations are not impacted in any way. Even though we’re in the age of Artificial Intelligence and other technological innovations, many organizations continue to rely on manual interventions, which are a drain on time, effort and effectiveness.


Automation of Incident Management – perhaps the only way forward

Here are 6 good reasons to automate Incident Management:


a) High Productivity, Low Costs – when manual interventions for repeatable tasks are reduced through automation, staff are freed up to focus on more strategic and value-added business tasks. It also brings costs down, as fewer valuable man-hours are spent on tasks that are now automated.


b) Faster Resolution  – the entire process is streamlined where incidents are captured through self-service portals, or various other incident reporting mechanisms such as mail, phone, chat, etc. Automation allows for better prioritization and routing to the appropriate resolution group, resulting in faster and improved delivery of incident resolution.


c) Minimize Revenue Losses – if service disruptions are not looked into immediately, businesses stand to lose revenue, which impacts reputation. Service delivery automation helps minimize the negative impact of these disruptions and achieves higher customer satisfaction by restoring services quickly without impacting business stability.


d) Better Collaboration – automation leads to better collaboration between business functions. With a bi-directional communication flow, detection, diagnosis, repair and recovery are achieved in record time. Incident management becomes easier and faster, and performance improves thanks to clearly defined tasks and greater collaboration through increased communication.


e) Effective Planning & Prioritization – Incidents can be prioritized based on the severity of the issue and can be automatically assigned or escalated with complete information to the appropriate task force. This helps in better scheduling and planning of overall incident management leading to a more efficient and effective management of issues. 


f) Incident Analysis for Overall Improvement of Infrastructure – with an analytics dashboard that you can call up periodically, you can assess problem areas by frequency, time, function, etc. This can lead to better-planned expenditure on future infrastructure investments.


Service delivery automation is the practical and sensible approach for organizations if they want to reduce human error, reduce valuable man-hours performing repetitive tasks and improve the quality of delivery significantly.


RLCatalyst is integrated with ServiceNow and helps in ticket resolution automation through its powerful BOTs framework. With RLCatalyst you can automate the provisioning of your infrastructure, keep a catalog of common software stacks, exercise your options on what to use based on what’s available, and improve the overall quality of service delivery. RLCatalyst is intelligent software that can study discernible patterns and identify frequently logged tickets, thus enabling faster incident resolution.


RLCatalyst comes with a library of over 200 BOTs that you can customize based on your needs and requirements. With RLCatalyst you get the dual benefit of DevOps automation and service delivery automation, taking your organization to the next level of efficiency and productivity.


