2020 Blog, Blog, DevOps Blog, Featured, ServiceOne, ServiceNow

Using Git configuration management integration in Application Development to achieve higher velocity and quality when releasing value-added features and products


ServiceNow offers a fantastic platform for developing applications. Infrastructure, security, application management, scaling, and so on are all taken care of by ServiceNow, so application developers can concentrate on their core competencies within their application domain. However, companies that develop applications on ServiceNow and distribute them to multiple customers face several challenges. In this article, we take a look at some of those challenges and their solutions.



A typical ServiceNow customization or application is distributed with several of the following elements:


  • Update Sets
  • Template changes
  • Data Migration
  • Role creation
  • Script changes

Distribution of an application is typically done via an Update Set, which captures all the delta changes on top of a well-known baseline. This baseline could be the base version of a specific ServiceNow release (such as Orlando or Madrid) plus a specific patch level for that release. To understand the intricacies of distributing an application, we first have to understand the concept of a global application versus a scoped application.


Typically, only applications developed by ServiceNow are in the global scope. However, before the Application Scoping feature was released, custom applications also resided in the global scope. This means that other applications can read the application’s data, make API requests, and change its configuration records.


Scoped applications, which are now the default, and their associated artifacts are uniquely identified by a namespace identifier. No other application can access the data, configuration records, or API unless specifically allowed by the application administrator.


Distributing an application via update sets is straightforward when the application has a private scope, since there are no global data dependencies to contend with.


The second challenge is with customizations done after an application has been distributed. Consider the following situation, which leads to two problems.


  • An application release has been distributed (let’s call it 1.0).
  • Customer-1 needs a customization in the application (say a blue button is to be added to Form-1). Now Customer-1 has 1.0 + the blue-button change.
  • Customer-2 needs a different customization (say a red button is to be added to Form-1).
  • The application developer has also made some other changes in the application and plans to release version 2.0 of the application.

Problem-1: If application 2.0 is released and Customer-1 upgrades to it, they lose the blue-button change and have to redo and retest it.



Problem-2: If the developer accepts the blue-button change into the application and releases 2.0 with it, then when Customer-2 upgrades to 2.0, their red-button change conflicts with the blue-button change.



These two problems can be solved with version control using Git. When the application developers want to accept the blue-button change into the 2.0 release, they can use Git’s merge feature to merge the commit containing the blue-button change from Customer-1’s repository into their own.
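
In practice, the merge might look like the following sketch, run from the application developer’s repository (the remote URL, branch names, and commit SHA are hypothetical):

    git remote add customer1 https://git.example.com/customer1/app.git
    git fetch customer1
    git checkout release-2.0
    git merge customer1/blue-button    # bring in the blue-button work
    # Or pick up only the single commit that adds the blue button:
    git cherry-pick <commit-sha>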


When Customer-2 needs to upgrade to version 2.0, they use Git’s stash feature to set aside their red-button change prior to the upgrade. After the upgrade, they apply the stashed change to bring the red button back into their instance.
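
A minimal sketch of that stash flow on Customer-2’s side (the stash message is illustrative):

    git stash push -m "red-button customization"   # set the local change aside
    # ...apply the 2.0 application upgrade here...
    git stash pop                                  # reapply the red-button change
    # Resolve any conflicts Git reports, then retest the red button.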


The ServiceNow source control integration allows application developers to integrate with a Git repository to save and manage multiple versions of an application from a non-production instance.


Using the best practices of DevOps and version control with Git, it is much easier to deliver software applications to multiple customers while dealing with the complexities of customized versions. To know more about ServiceNow application best practices and DevOps, feel free to contact: marketing@relevancelab.com



2020 Blog, Blog

The industry has witnessed a remarkable evolution in cloud, virtualization, and networking over the last decade. Looking ahead, enterprises should structure their cloud estate, including account creation and management, plan for VPC (Virtual Private Cloud) scalability, and optimize their network connectivity model.


Relevance Lab focuses on optimizing the network connectivity model using AWS VPC and related services. We are a premium AWS partner with extensive experience in cloud engineering, including account management, AWS VPC and services, VPC peering, and on-premises connectivity. In this blog, we cover the more significant aspects of connectivity.


As an initial step, we help our customers optimize account management, including policies, billing, and account structure, and automate account and policy creation with pre-built templates. In doing so, we help them leverage AWS Control Tower and Landing Zone.


Distributed Cloud Network and Scalability Challenges


Because AWS cloud infrastructure and applications are distributed across the globe, including North America, South America, Europe, China, Asia Pacific, South Africa, and the Middle East, scalability challenges are inevitable. Hence, we bring in best practices while designing the cloud network, including:


  • Allocate VPC networks for each region; make sure they do not overlap with on-premises ranges or across VPC peerings.
  • Subnet sizes are permanent and cannot be changed, so leave ample room for future growth.
  • Create an EC2 instance growth chart considering your organization’s, customers’, and partners’ requirements and their global distribution; forecast for the next few years.
  • Plan within the route scalability limits supported by AWS.

The AWS backbone provides private connectivity to AWS services: no traffic leaves the backbone, and there is no need to set up a NAT gateway to reach these services from on-premises. This brings enhanced security and cost-effectiveness.


Amazon S3 and DynamoDB are the services supported behind gateway endpoints, and these endpoints are free to provision.
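
As a hedged example, a gateway endpoint for S3 can be provisioned with the AWS CLI along these lines (the VPC ID, route table ID, and region are hypothetical):

    aws ec2 create-vpc-endpoint \
        --vpc-id vpc-0abc1234567890def \
        --service-name com.amazonaws.us-east-1.s3 \
        --route-table-ids rtb-0123456789abcdef0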


VPC peering is the simplest and most secure way to connect two VPCs, but it is non-transitive: one VPC cannot transit traffic for other VPCs. Because of this design constraint, the number of peerings an enterprise needs is very high and grows as the VPCs scale (a full mesh of n VPCs needs n(n-1)/2 peering connections). Hence, we suggest the VPC peering connectivity model only if the number of VPCs is limited to about 10.
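
For reference, a single peering between two VPCs can be set up with the AWS CLI roughly as follows (all IDs and the CIDR block are hypothetical). Each side still needs a route to the peer’s CIDR, which is part of why a full mesh quickly becomes unmanageable:

    aws ec2 create-vpc-peering-connection --vpc-id vpc-0aaa111 --peer-vpc-id vpc-0bbb222
    aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0ccc333
    # Route traffic destined for the peer VPC through the peering connection
    aws ec2 create-route --route-table-id rtb-0ddd444 \
        --destination-cidr-block 10.1.0.0/16 \
        --vpc-peering-connection-id pcx-0ccc333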


On-premises connectivity models for cloud – Our Approach


The traditional connectivity model, a site-to-site VPN between a VGW (Virtual Private Gateway) and a customer gateway, provides limited bandwidth, is less elastic, and cannot scale.


A Direct Connect gateway may be used to aggregate VPCs across geographies. Relevance Lab recommends introducing a Transit Gateway (TGW) between the VGW and the Direct Connect gateway. This provides a hub-and-spoke design for connecting VPCs to an on-premises data centre or enterprise offices. A TGW can aggregate thousands of VPCs within a region, and it can work across AWS accounts. If VPCs across multiple geographies are to be connected, a TGW can be placed in each region. This option allows you to connect the on-premises data centre to up to three TGWs spanning regions, each of which can aggregate thousands of VPCs.
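
As an illustrative sketch, a regional TGW and a VPC attachment can be created with the AWS CLI along these lines (the IDs are hypothetical; associating the TGW with the Direct Connect gateway is a separate step):

    aws ec2 create-transit-gateway --description "regional hub"
    aws ec2 create-transit-gateway-vpc-attachment \
        --transit-gateway-id tgw-0abc123 \
        --vpc-id vpc-0def456 \
        --subnet-ids subnet-0ghi789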



This model allows connectivity through a colocation facility between the VPCs and the customer data centre and provides bandwidth of up to 40 Gbps through link aggregation.


Relevance Lab is evaluating the latest features and helping global customers migrate to this distributed cloud model, which provides high scalability and bandwidth along with cost optimization and security.


For more information feel free to contact marketing@relevancelab.com



2020 Blog, Blog, Featured

Office Social Distancing Compliance in the New Normal


With the ongoing coronavirus pandemic, businesses around the globe are adjusting their work practices to the new normal. One of the critical requirements is the practice of social distancing with employees being asked to mostly work from home and limit their visits to the office. To comply with newly issued corporate (and even government) guidelines around social distancing, offices around the world are faced with the need to monitor, control, and restrict the number of employees attending office premises on a given day. The thresholds are typically defined as a percentage of maximum office space/floor capacity to ensure sufficient room for social distancing.


Office Attendance Control System


In response to these client needs, an Office Attendance Control System has been implemented that provides request/approval workflows with multi-level approvals, plus monitoring and reporting dashboards for employee attendance, attendance thresholds, and compliance. Employees can raise requests for themselves or a group of team members, and for multiple date ranges. The system is implemented as a responsive web application, with email used as the tool to approve or reject requests.


The system is implemented as a scalable, cloud-based solution that can be deployed rapidly for companies with a local or global presence and offices around the world.


Sample Architecture



Conclusion


The coronavirus has altered “what is normal” (perhaps permanently in a few cases), which requires companies and solution providers to implement innovative solutions to function effectively under the new circumstances. Relevance Lab’s implementation of the “Office Attendance Control Solution” helps companies with global offices effectively implement social distancing practices in the office environment.


Please reach out to us if interested so that we can help meet your critical business needs.


About Relevance Lab


Relevance Lab is a platform-enabled new-age IT services company driving friction out of IT and Business Operations.


Relevance Lab works with re-usable technology assets & proven expertise in the area of DevOps, Cloud, Automation, Service Delivery and Supply Chain Analytics that help global organizations achieve frictionless business by transforming their traditional infrastructure, applications, and data. In the changing landscape of technologies and consumer preferences, Relevance Lab enables global organizations to


  • Adopt “asset lite” growth models by leveraging cloud (IAAS, PAAS, SAAS)
  • Shift CAPEX to OPEX
  • Automate to improve efficiency and reduce costs
  • Build an end-to-end ecosystem connecting digital products to backend ERP platforms
  • Use agile analytics (our Spectra platform) to provide real-time business insights and improve customer experiences

Relevance Lab has invested in a unique IP-based DevOps automation platform called RL Catalyst. Enterprises can leverage the RL Catalyst Command Center and BOTs Engine, with 100+ existing BOTs, for managing IT Operations, ensuring business service availability through monitoring, diagnostics, and auto-remediation. Leveraging its critical partnerships with ServiceNow, Chef, HashiCorp, Oracle Cloud, AWS, Azure, GCP, etc., the unique combination of Relevance Lab platforms and services offers a hybrid value proposition to large enterprises and technology companies. Our specialization helps the adoption of cloud computing, digital solutions, and big data analytics to remove friction, increase business velocity, and achieve scale at lower annual budget spends for clients.


Incorporated in 2011 and headquartered in the USA, Relevance Lab has 450+ specialized professionals spread across its offices in India, USA and Canada.


Listed among “Best Companies to Work for in 2019” – Silicon India magazine, May 2019


Listed among “20 most promising DevOps companies in 2019” – CIOReviewIndia magazine, May 2019



2020 Blog, Blog, Featured, RLAws Blogs, ServiceNow

Using ServiceNow, AWS Service Catalog and RLCatalyst to create a 1-Click model


AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to manage commonly deployed IT services centrally. It helps you achieve consistent governance and meet your compliance requirements while enabling users to quickly deploy only the approved IT services they need.
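
For example, once a product has been approved into a portfolio, an end user (or an automation acting on their behalf) can provision it with a single AWS CLI call, as in this sketch (the product and artifact IDs are hypothetical):

    aws servicecatalog provision-product \
        --product-id prod-abcd1234efgh \
        --provisioning-artifact-id pa-ijkl5678mnop \
        --provisioned-product-name onboarding-stack-jdoe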


Working closely with AWS and ServiceNow partnership teams, we have created an integrated solution for enterprises to enable Frictionless User Onboarding and Offboarding in these challenging times of COVID-19. The solution brings together the following building blocks.


Automation:


  • Auto-notification from HR systems for new employee onboarding or offboarding, or requests raised through self-service portals.
  • Workflow automation in ServiceNow for user-driven or event-generated request handling and automatic workflow triggering.
  • Cloud automation with appropriate compliance and policy checks.
  • Orchestration across multiple enterprise system adapters and complex workflows, with integrated approval management based on company policies.
  • Hyper-automation using a “Service Bus” model with BOTs across cloud and datacenter workloads of systems and apps. These cover end-user computing devices (desktops) and servers with a combination of Windows and Linux workloads.

Integration Service Bus:


  • Integration with Taleo or Workday HR systems that manage the People Management workflows.
  • Integration with organization Identity and Access Management tools (Active Directory, SSO, IDAM).
  • Integration with existing ITSM Tools, CMDB/Asset Management and Self Service Portals.
  • Integration with Cloud Infrastructure and Hybrid setups with appropriate policy controls with cost & governance management.
  • Integration with Automated Vulnerability and Patch management lifecycle for all Dynamic Assets.

Intelligent Compliance:


  • Existing SOX processes for assets and resource access controls and compliance.
  • Software Asset Management (SAM) controls as appropriate for the organization (Dynamic Systems and Software CMDB updates).

The following diagram explains the end-to-end orchestration.



The simulated sample flow supports both single-user and bulk user onboarding with an automated multi-stage process that covers service request creation, AD user provisioning, AWS Workspace provisioning, and notification to the end user after provisioning.
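
As a hedged illustration, the AWS Workspace provisioning step could reduce to a single AWS CLI call like the one below (the directory, user, and bundle identifiers are hypothetical; in the solution this is triggered by the orchestration rather than run by hand):

    aws workspaces create-workspaces --workspaces \
        "DirectoryId=d-9067abcdef,UserName=jdoe,BundleId=wsb-abcd1234e"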


Using the RLCatalyst intelligent automation product, customers can download the entire solution from a marketplace and enable it in their environments. It is pre-bundled for deployment inside a secure customer environment and includes:


  • A ServiceNow plug-in.
  • An RL BOTs server deployment.
  • AWS Service Catalog integration and BOTs server deployment inside a secure environment of the customer.

For more details, please feel free to reach out to marketing@relevancelab.com


2020 Blog, Blog, Digital Blog, Featured

Does this sound like your new operating guidelines?


Rapid changes are happening as companies adapt to the new normal and modify their internal processes, people, and technology. It is essential to understand the drivers behind this change to make a smooth transition.



For a customer to start the journey towards this “Virtual IT” driven Frictionless Enterprise, the following approach should help:


Step-1: Do a quick assessment of all internal IT interactions to understand the transition from high-touch models to touch-less modes. A web/mobile-based self-service portal with a standard catalog of asset and service requests is a good starting point. Fulfilment of such requests by automated BOTs helps improve the overall experience.


Step-2: With the workforce becoming distributed and using all infrastructure and applications remotely, a solid assessment of the Identity & Access Management architecture is required. There is a fine balance between flexibility and security vulnerability. All remote assets also need a mature security, patch management, and vulnerability management framework. Having a real-time view of access control and security incidents, with a modern security incident handling SOP, is key.


Step-3: Removing dependence on physical assets and adopting virtual assets, especially in the cloud, can give a quick jump-start. It is critical to have dynamic asset management and a CMDB to get a real-time view of hardware and software assets. Automating user onboarding and application provisioning, and similarly offboarding, is key to responding with agility without compromising on governance. The adoption of software-driven operations support, virtual assets, and proactive SecOps will help meet the need for speed and security at lower costs.


We are actively changing our internal operations and helping our customers adopt these platforms faster. Our ServiceOne platform, combined with our “touchless” implementation model, helps make this transition in less than four weeks and can jump-start the change. ServiceOne removes friction by adding value to your existing ITSM tool implementation. If you don’t have one, no problem: ServiceOne comes bundled with a de-facto solution for a frictionless, turnkey implementation.



There are immediate benefits to this approach, powered by the adoption of established cloud platforms, mature software, and stable best practices.



For more information feel free to contact marketing@relevancelab.com


2020 Blog, Analytics, Blog, Command blog, Featured

Automation with simple scripts is relatively easy, but complexity creeps in when solving real-world, production-grade problems. A compelling use case was shared with us by a large financial asset management customer. They work with a provider that supplies a large number of property and financial data feeds in multiple formats, arriving at frequencies ranging from daily, weekly, and monthly to ad hoc. The customer’s business model is driven by processing these feeds and creating “data pipelines” for ingestion, cleansing, aggregation, analysis, and decisions from their enterprise Data Lake.


The customer’s current ecosystem comprises multiple ETL jobs, which connect to various internal and external systems and feed a Data Lake for further data processing. The complexity was enormous: data volumes were high, the chance of failures was high, and the jobs required continuous human intervention and monitoring. Support teams receive an email notification only when a job completes successfully or fails, so the legacy system makes job monitoring and exception handling quite tricky. The following simple pictorial representation explains a typical daily data pipeline and its associated challenges:



The legacy solution had multiple custom scripts implemented in Shell, Python, and PowerShell that called Azure Data Factory via its API to run a pipeline. Each independent task had its own complexities, and there was no end-to-end view with real-time monitoring and error diagnostics.
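
For context, the kind of call those scripts made can be reduced to one Azure CLI command, as in this sketch (the resource names are hypothetical, and the 'datafactory' CLI extension is assumed to be installed):

    az datafactory pipeline create-run \
        --resource-group rg-datalake \
        --factory-name adf-feeds \
        --name daily-ingestion-pipeline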


A new workflow model was developed using the RLCatalyst workflow monitoring component (using YAML definitions), and the existing customer scripts were converted to RLCatalyst BOTs using a simple migration designer. Once loaded into RLCatalyst Command Centre, the solution provides a real-time and historical view, notifies support teams of anomalous situations, and can take auto-remediation steps based on configured rules.


We deployed the entire solution, including migration of the existing scripts, in the customer’s Azure environment in just three weeks.



RLCatalyst Workflow Monitoring provides a simple and effective solution that is quite different from standard RPA tools: RPA deals mostly with end-user processing workflows, while RLCatalyst Workflow Monitoring is more relevant for machine data processing workflows and jobs.


For more information feel free to contact marketing@relevancelab.com



2020 Blog, Blog, Featured

The recent COVID-19 virus outbreak has driven many unanticipated changes in the way we do business. In particular, many people who usually go into the office are now required to work from home, which can mean different access methods (e.g. VPN Access) and permissions. These changing conditions have the potential to overwhelm service desks.


We have seen this with our clients in terms of a dramatic increase in service requests. We are pleased with the role our automation has played during these trying times. Our intelligent BOTs have enabled Relevance Lab to respond to upticks in service requests instantly.



Figure 1: Intelligent Automation Eliminated Service Desk Impact

For one of our significant clients with a large NYC footprint, daily inbound tickets increased 92% (Figure 1) on the March peak day (vs. the February average) as the effects of the coronavirus forced people to work from home. Fortunately, nearly all those tickets were managed using RL’s Intelligent BOTs, and we were able to handle the increased volume with no delay. Our Intelligent BOTs handled two and a half times (Figure 2) their normal daily workload so that our service desk team could maintain focus on other critical business needs.



Figure 2: Dramatic Ticket Spike as People Prepared to Work-from-Home

Over the last year, we have increased the coverage and complexity of our Intelligent Automation to achieve 70-80% inbound ticket automation, with an equivalent reduction in human effort. Having a robust platform with standardized processes, BOTs-driven automation, and reliable automation analytics has helped us better prepare for the unknowns.


As we all continue to weather this crisis together, please stay safe. We wish everyone the best.


For more information feel free to contact marketing@relevancelab.com



2020 Blog, Blog, Featured

As per multiple market research sources, the AIOps market will grow at a CAGR of 33% through 2024. Though the research attributes this growth to the hybrid readiness of AIOps technologies clubbed with RPA, we at Relevance Lab believe that RPA alone may not solve the problem in its totality. Hence, our AIOps platform is driven by Intelligent Automation.


AIOps is gaining traction because of its ability to provide real-time data by eliminating silos, enable better tracking and management, and automate problem-solving. As per one private research firm, about 87% of organizations are creating value by using AIOps platforms.


What business challenges do you solve using an efficient AIOps Platform?


  • Improve Operational Efficiency
  • Reduce Customer Churn

With increasing complexity and dynamism of IT environments, including infrastructure and applications, the ability to provide a holistic platform is critical today. So how do you go about evaluating the right platform?


Problem Identification:

The platform should be able to identify problems in a timely manner by analyzing infrastructure and application behavior, a need amplified by digitization and cloud migration. This rising trend is expected to drive the market’s revenue growth over the next five years.


Integrations:

Does the platform seamlessly integrate with ITSM, ITOM, cloud, and automation platforms? One of the most significant challenges in enterprises today is silos, where systems, skills, data, and infrastructure are ‘owned’ by a specific team or department that works in isolation from other parts of the organization. Any solution being considered must be technology, location, vendor, data, and domain agnostic.


Data Normalization:

The platform should be able to collect unstructured and fragmented data and turn it into actionable insights. Ideally, before you make any decisions on IT solutions, you should have analyzed and understood what data you have, where it is stored, who ‘owns’ it, and what value it has to the business.


Relevance Lab’s Agile Analytics helps global organizations with solutions that run across the entire enterprise with speed, responsiveness, and flexibility.


Leverage AI-based Technology for RCA:

The introduction of cheap compute resources, the ubiquity of data and the adoption of new technologies such as Artificial Intelligence (AI) all mean that software-based RCA techniques can and should be implemented as a priority. AI, and especially Machine Learning, can process vast amounts of data to detect anomalies in real-time and to predict potential issues and uncover trends.


Real-time & Pro-active:

The introduction of Artificial Intelligence-based technologies has seen the ability to analyze huge amounts of data to detect anomalies in real-time and to perform predictive analysis to prevent service outages. Combined with the ability to automate actions, these technologies allow the organization to move from reactive to proactive.


Flexibility, Openness and Future-ready:

IT solutions of this kind are often complicated and expensive. Evaluating a solution that is neither far below the benchmark nor far above it (and beyond reach) is therefore critical. Carefully reviewing the design and architecture of your IT solution to ensure that it is ‘future-proof’ is recommended:


  • Is the product built on proprietary technology that may be difficult to maintain and support in the future?
  • If the solution uses 3rd party products or services, can these be easily swapped out for others?
  • Is the solution ‘open’ with well-defined APIs and integrations?
  • Will the solution be able to perform and scale to your growth plans?

Implementing ITOM, ITSM, and AIOps is now the need of the hour for organizations wanting to improve service levels, customer satisfaction, and overall operational efficiency by removing silos and leveraging Intelligent Automation.


There is a plethora of choices out there in the market; choosing your technology wisely is recommended.


For more information feel free to contact marketing@relevancelab.com



2020 Blog, Blog

The probability of redefining marketing is “one” (read as the numeral 1: P(Redefining Marketing) = 1). The paradox is that it is inevitably certain! It’s high time to redefine marketing, and I have made a humble attempt at it below.


Marketing is a well-targeted, conversion-oriented, quantifiable, and interactive method of converting a prosumer into a consumer (and vice versa), thereby promoting new or existing products or services, with innovative technology as an enabler to predict needs and to acquire and retain customers. Easily said; however, it’s a mix of storytelling, data analysis, technology, customer experience design, experimentation, systems thinking, and of course brand management, a combination of skills that may be hard to find.


A marketing scientist should be capable of understanding automation, data, and emotions equally well to make this simpler. Will humans really perish as a result of AI? Absolutely not. However, AI would definitely force the community to deviate from its conventional approach and take on a new way of working and living. A fully automated, integrated marketing platform should do the following:


  • Gather Data
  • Plan and Automate
  • Increase value

While marketing scientists are capable of working with little or no data at all, relying on highly intuitive and psychological skills, they gather insights from experiments like A/B testing to study content and its impact on behaviour. They use these tactics to render content based on dynamic segmentation, which would, by all means, be a “segment of one”.


Read “intuitive and psychological skills” as: “The machines may still need a human to do certain things that they can’t, and therefore the ‘future of jobs’ may lie at the dichotomy of humanities and science”, a gap that our educational system may have to fill rapidly in order to avoid urban depression and suicide. Returning to the marketing scientist, which may be the way to go, the scientific method used by a marketing scientist includes the following:


  • Listening
  • Framing Hypothesis
  • Experimenting and Collecting Data
  • Analysis, Inference and Conclusion

Listen:

“Listen” is what traditional market research terminology calls “observe”. Many marketers do not allocate budget for listening, which imperatively means deploying an AI-based system to listen to existing and prospective customers about their needs across various channels, which may include:


Web: map the customer journey and tap behaviour, which may include frequency, recency, depth (interest), time, and source, and thereby arrive at a purchase-intent score. Any transactional data on the respective e-commerce engine would allow the recommendation engine to make the next best offer.


  • Mobile App / Wallets
  • Social Media
  • Email
  • Chat
  • Point of Sale (Includes Physical Store and Electronic Kiosks)
  • IVR
  • USSD

This breaks the data silos and creates a 360-degree customer view, a true “omnichannel”, which today exists mostly in presentations despite several tall claims.



Frame hypothesis:

Develop a hypothesis which is deeply embedded in the target audience.


Traditionally, companies have attempted persona development. However, restating the earlier statement in the current context:


The probability of identifying a persona is one (P(Persona) = 1). Put simply, no two personas are identical. Every customer is different and therefore needs to be engaged differently. This is an approach that today’s recruiters or talent analysts will certainly fail on, and hence identifying the kind of marketing scientist one requires will be one of the biggest challenges of today and tomorrow.


Reason to Buy: That’s your story. The story changes from customer to customer; however, the value you offer may not.

Measure: Deploy processes to measure in both qualitative and quantitative terms.



Experimentation and Data Collection:

Experiment on channels, content, segments, spend, pricing, and packaging. For a marketing scientist, this experimentation is not limited to digital means like A/B testing.


Analysis, Inference and Conclusion:

This could be one of the most interesting aspects of a marketing scientist’s job. A few examples of analyses and methods are mentioned below:


Attribution Modelling – Optimize ad/channel spend based on conversion goal paths by assigning weightages to the sources.

Cohort Analysis – Convert data into dollars by analysing customer groups across a variety of common attributes and creating engagements specific to cohorts.

Transaction Analysis – Convert visits into conversions by analysing product sales potential and creating engagements specific to product groups.

Product Analysis – Identify the strong and weak products and enable engagement through offers and coupons to the audience at a one-to-one level.

Measure and fine-tune conversion goal paths – Use reverse goal path analysis based on the last URL, with timely engagement interventions to avoid path diversion.

Page Analysis and Heat Maps – Identify page performance to enable optimization.

Measure your content for effectiveness – Perform split tests or multivariate analysis to arrive at the right content.

Enabling email automation for conversions – The email marketer can publish, track recipients’ actions on the website, and automate response-based email marketing for effectiveness.

Sentiment Analysis – Identify the social sentiment of your brand or events across social media, and identify the key influencers and trends.

Think in probabilities; that’s one of the fundamentals of pursuing a career as a marketing scientist. For those of you who skipped your probability classes, keep learning.



About the Author:

Ajeesh is a senior marketer who has built high-performance teams to drive revenue for new and re-positioned brands across the B2B/B2C segments. He has a blend of the right and the left brain: a creative genius, occasionally crazier and yet adamantly saner than the average person.


An educational evangelist and a brand champion, he loves studying competitive landscapes and designing product visions and global market strategies.


2020 Blog, AIOps Blog, Blog, Featured

With assets distributed across cloud and non-cloud environments, covering desktops, servers, and other devices, enterprises still take a fragmented approach to the basic needs of patch management. This brings unique risks from a security and vulnerability perspective. Even when companies do focus on this area, there is a lack of integration between asset management, vulnerability assessment, patch management, and governance, and hence no comprehensive solution that leverages an “Automation-First” approach and integrated workflows. This is where RLCatalyst ServiceOne brings enterprises a solution they can leverage in a managed service model.


The solution covers all enterprise assets and helps with discovery, vulnerability assessment, and then managing the full lifecycle of patch management. Patch management is complicated because large enterprises commonly have modern and legacy systems covering desktops (Windows, Linux, macOS), servers (Red Hat, Debian, Ubuntu, CentOS, Windows Server, etc.), network devices, and other assets across data centres and clouds (AWS, Azure, GCP, etc.).


RLCatalyst ServiceOne Solution – Five Layers of Vulnerability & Patch Management of your Infrastructure


The whole process of intelligent automation of SecOps starts with the asset inventory, ensuring you have complete control and visibility of your infrastructure. Once this is in place, the next important step is to run periodic vulnerability scans using third-party applications like Qualys or AWS Inspector. Based on the VA scan report, an automated patch management solution is put in place, after which SIEM tools can provide real-time analysis of security alerts. The dashboards and reports then provide a holistic view of the health of your overall infrastructure from a security standpoint, which the CIO of any organization would be keen to see daily.


ServiceOne Patch Management Solution:


ServiceOne Patch Management is a fully integrated solution covering patching, backup, and recovery. It is integrated with ITSM for overall management, which helps organizations run scheduled, unscheduled, or ad-hoc scans on their systems to identify missing patches and patch them through an approval process.


The IT team verifies the patches from the periodic scans, categorises them by criticality, and bundles them. The bundles are then pushed to the application owners, who can log in to ServiceNow, check the available bundles against their set of servers, and approve or reject them. Once approved, in the next available scheduled maintenance window, the process automatically takes a backup image of the servers to be patched and then patches the development servers.
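
To make the automated scan-and-patch step concrete, here is a hedged sketch using AWS Systems Manager as one possible building block for AWS-hosted servers (the tag name and value are hypothetical; ServiceOne’s own orchestration drives the equivalent steps across environments):

    # Scan the development group for missing patches
    aws ssm send-command --document-name "AWS-RunPatchBaseline" \
        --targets "Key=tag:PatchGroup,Values=dev-servers" \
        --parameters "Operation=Scan"
    # After approval, install the approved patches in the maintenance window
    aws ssm send-command --document-name "AWS-RunPatchBaseline" \
        --targets "Key=tag:PatchGroup,Values=dev-servers" \
        --parameters "Operation=Install"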


The next step is a post-patching approval, in which the app owners check and confirm application compatibility and the functionality of the patches against their applications.


The app owners have the option to reject the patching in ServiceNow, in which case the backup image is restored to the development instance. On approval, the same patching is automatically scheduled for the production servers during the next maintenance window.


With the RLCatalyst ServiceOne solution, we provide enterprises a combination of consulting, technology, and integrated services to take care of end-to-end patch management needs. Customers can leverage the best products in the industry across service orchestration, asset discovery, vulnerability assessment, patch lifecycle management, and compliance. Enterprises can get started in less than four weeks, covering onboarding, setup, initial compliance, and ongoing upgrades. A large global enterprise saved $0.5 million in its first year of operations as it transitioned 5000+ assets across 10+ data centres and cloud regions to the ServiceOne Integrated Patch Management solution with Relevance Lab managed services.


For more information feel free to contact marketing@relevancelab.com


