2023 Blog, Blog, DevOps Blog, Featured

In today’s complex regulatory landscape, organizations across industries are required to comply with various regulations, including the Sarbanes-Oxley Act (SOX). SOX compliance ensures accountability and transparency in financial reporting, protecting investors and the integrity of the financial markets. However, manual compliance processes can be time-consuming, error-prone, and costly.

Relevance Lab’s RLCatalyst and RPA solutions provide a comprehensive suite of automation capabilities that can streamline and simplify the SOX compliance process. Organizations can achieve better quality, velocity, and ROI tracking, while saving significant time and effort.

SOX Compliance Dependencies on User Onboarding & Offboarding
Given the current situation, with many employees working from home or in remote areas, managing resources and time has become more challenging. On the topic of user provisioning in particular, there are risks such as individual users being granted unauthorized access to systems beyond what their roles or responsibilities require.

Most organizations follow a defined user provisioning process, which starts with a user access request containing relevant details, including:

  • Username
  • User Type
  • Application
  • Roles
  • Line manager approval
  • Application owner approval

Based on the policy requirements, IT finally grants the access. Several organizations still follow this process manually, creating a security risk.

In such a situation, automation plays an important role. Automation reduces manual work, labor costs, and reliance on individual resources, and improves time management. An automation process built with proper design, tools, and security reduces the risk of material misstatement, unauthorized access, and fraudulent activity. Using ServiceNow has also helped in tracking and archiving the evidence (an evidence repository) essential for compliance. Effective compliance results in better business performance.
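To make this concrete, below is a minimal, hypothetical shell sketch of such an automation step: it checks the approval state of a ServiceNow access request through the standard Table API and provisions access only after approval, writing an audit log either way. The instance URL, request number, credentials, and the grant_access.sh helper are illustrative placeholders, not part of RLCatalyst.

```bash
#!/usr/bin/env bash
# Hypothetical sketch: gate provisioning on an approved ServiceNow request.
# SN_USER/SN_PASS come from the environment; grant_access.sh is a placeholder
# for the actual provisioning logic.
SN_INSTANCE="https://yourinstance.service-now.com"
REQ_NUMBER="RITM0010001"

# Query the requested item's approval state via the standard Table API.
state=$(curl -s -u "$SN_USER:$SN_PASS" \
  "$SN_INSTANCE/api/now/table/sc_req_item?sysparm_query=number=$REQ_NUMBER&sysparm_fields=approval" \
  | jq -r '.result[0].approval')

if [ "$state" = "approved" ]; then
  ./grant_access.sh "$REQ_NUMBER" \
    && echo "$(date -u +%FT%TZ) access granted for $REQ_NUMBER" >> audit.log
else
  # Rejected or pending requests get no access; the attempt is still logged.
  echo "$(date -u +%FT%TZ) $REQ_NUMBER not approved (state=$state); no access granted" >> audit.log
fi
```

The audit log produced by every run becomes part of the evidence repository mentioned above.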

RPA Solutions for SOX Compliance
Robotic process automation (RPA) is quickly becoming a requirement in every industry looking to eliminate repetitive, manual work through automation and behavior mimicry. It reduces a company’s use of resources, saves money and time, and improves the accuracy and standard of the work being done. Many businesses are still not taking advantage of the potential of deploying RPA in the IT compliance process due to barriers including lack of knowledge, the absence of a standardized methodology, or a preference for carrying out these operations in the conventional manner.

Below are the key areas to focus on:

  • Standardization of Process: Even though every organization is diverse and uses different technologies and processes, there are opportunities to standardize SOX compliance techniques, frameworks, controls, and processes. Around 30% of the environment in a typical organization may be deemed high-risk, whereas the remaining 70% is medium- to low-risk. To improve the efficiency of the compliance process, a large portion of the paperwork, testing, and reporting related to that 70% can be standardized. This would make it possible to concentrate more resources on high-risk areas.
  • Automation & Analytics: Opportunities to add robotic process automation (RPA), continuous control monitoring, analytics, and other technology grow as compliance processes become more mainstream. These prospective SOX automation technologies not only have the potential to increase productivity and save costs, but they also offer a new viewpoint on the compliance process by allowing businesses to gain insights from the data.


How Can Automation Reduce Compliance Costs?


  • Shortening the duration and effort needed to complete SOX compliance requirements: Many of the time-consuming and repetitive SOX compliance procedures, including data collection, reconciliation, and reporting, can be automated. This can free up your team to focus on more strategic and value-added activities.
  • Enhancing the precision and completeness of data related to SOX compliance: Automation can aid in enhancing the precision and thoroughness of SOX compliance data by lowering the possibility of human error. Automation can also aid in ensuring that information regarding SOX compliance is gathered and examined in a timely and consistent manner.
  • Recognizing and addressing SOX compliance concerns faster: By giving you access to real-time information about your organization’s controls and procedures, automation can help you detect and address SOX compliance concerns more rapidly. By doing this, you can prevent expensive and disruptive compliance failures.

Automating SOX Compliance using RLCatalyst
As noted above, the RLCatalyst platform provides a comprehensive suite of automation capabilities that streamline and simplify the SOX compliance process. Its key capabilities include:



  • Continuous Monitoring: RLCatalyst enables continuous monitoring of controls, ensuring that any deviations or non-compliance issues are identified in real-time. This proactive approach helps organizations stay ahead of potential compliance risks and take immediate corrective actions.
  • Documentation and Evidence Management: RLCatalyst’s automation capabilities facilitate the seamless documentation and management of evidence required for SOX compliance. This includes capturing screenshots, logs, and other relevant data, ensuring a clear audit trail for compliance purposes.
  • Workflow Automation: RLCatalyst’s workflow automation capabilities enable organizations to automate and streamline the entire compliance process, from control testing to remediation. This eliminates manual errors and ensures consistent adherence to compliance requirements.
  • Reporting and Analytics: RLCatalyst provides powerful reporting and analytics features that enable organizations to gain valuable insights into their compliance status. Customizable dashboards, real-time analytics, and automated reporting help stakeholders make data-driven decisions and meet compliance obligations more effectively.

Example – User Access Management


The comparison below is organized by control, showing the risk each control addresses and the manual versus automated approach.

Control 1: New and modified user access
  • Risk: Unauthorized users are granted access to applicable logical access layers; key financial data/programs are intentionally or unintentionally modified.
  • Control: New and modified user access to the software is approved by an authorized approver as per the company IT policy, and all access is appropriately provisioned.
  • Manual: Access to the system is provided manually by the IT team based on the approval given as per the IT policy and the roles and responsibilities requested. An SOD (Segregation of Duties) check is performed manually by the Process Owner/Application Owner as per the IT policy.
  • Automation: Access to the system is provided automatically by an auto-provisioning script designed as per the company IT policy. The BOT checks for SOD role conflicts and provides the information to the Process Owner/Application Owner as per the policy. If the approver rejects the approval request, the BOT grants the user no access, and audit logs are maintained for compliance purposes.

Control 2: Privileged access
  • Risk: Unauthorized users are granted privileged rights; key financial data/programs are intentionally or unintentionally modified.
  • Control: Privileged access, including administrator accounts and superuser accounts, is appropriately restricted from accessing the software.
  • Manual: Access to the system is provided manually by the IT team based on the given approval as per the IT policy. A manual validation check and approval on restricted access to the system is provided by the Process Owner/Application Owner as per company IT policy.
  • Automation: Access to the system is provided automatically by an auto-provisioning script designed as per the company IT policy. If the approver rejects the approval request, the BOT grants the user no access, and audit logs are maintained for compliance purposes. The BOT can also limit the count and duration of access to the system based on its configuration.

Control 3: Access review
  • Risk: Unauthorized users are granted access to applicable logical access layers; key financial data/programs are intentionally or unintentionally modified.
  • Control: Access requests to the application are properly reviewed and authorized by management.
  • Manual: User access reports need to be extracted manually for access review, using tools or with the help of IT. Review comments then need to be provided to IT for de-provisioning of access.
  • Automation: The BOT can help the reviewer extract a system-generated report on the users, compare the active user listing with the HR termination listing to identify terminated users, and be configured to de-provision access of any user identified in the review report as having unauthorized access.

Control 4: Timely removal of terminated users
  • Risk: Unauthorized users retain access to applicable logical access layers if it is not removed in a timely manner.
  • Control: Terminated application users’ access rights are removed on a timely basis.
  • Manual: System access is deactivated manually by the IT team based on the approval provided as per the IT policy.
  • Automation: System access can be deactivated by an auto-provisioning script designed as per the company IT policy. The BOT can be configured to check a user’s termination date and deactivate system access if SSO is enabled, or to deactivate user access based on approval.

The comparison above details the manual and automated approach for each control; automation can bring roughly 40-50% gains in cost, reliability, and efficiency. As an illustration, a simplified sketch of the review BOT from Control 3 follows.
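This is a minimal, hypothetical shell sketch: it compares an active-user export against an HR termination listing and flags the accounts to de-provision. The file names and the one-username-per-line format are assumptions for the example, not RLCatalyst artifacts.

```bash
#!/usr/bin/env bash
# Hypothetical sketch: find terminated users who still hold active accounts.
# Inputs are plain text files with one username per line.
sort -u active_users.txt > /tmp/active.sorted
sort -u hr_terminations.txt > /tmp/terminated.sorted

# comm -12 prints only the lines common to both sorted files, i.e. users who
# are terminated in HR but still active in the application.
comm -12 /tmp/active.sorted /tmp/terminated.sorted > to_deprovision.txt

while read -r user; do
  echo "$(date -u +%FT%TZ) flagged $user for de-provisioning" >> audit.log
done < to_deprovision.txt
```

A real BOT would feed to_deprovision.txt into the auto-provisioning script and attach the audit log as compliance evidence.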

Conclusion
SOX compliance is a critical aspect of ensuring the integrity and transparency of financial reporting. By leveraging automation with RLCatalyst and RPA solutions from Relevance Lab, organizations can streamline their SOX compliance processes, reduce manual effort, and mitigate compliance risks. The combination of RLCatalyst’s automation capabilities and RPA solutions provides a comprehensive approach to achieving SOX compliance more efficiently and cost-effectively. This blog was enhanced using our own GenAI Bot, which assisted in its creation.

For more details or enquiries, please write to marketing@relevancelab.com





2023 Blog, Blog, BOTs Blog, DevOps Blog, Featured

With growing interest and investment in new concepts like Automation and Artificial Intelligence, the common dilemma for enterprises is how to scale these for significant impact in their own context. It is easy to run a small proof of concept but much harder to make a broader impact across a landscape of hybrid infrastructure, applications, and service delivery models. Even more complex is the organizational change management required for the underlying processes, culture, and “Way of Working”. There is no “silver bullet” or cookie-cutter approach that can deliver radical change; it requires investment in a roadmap of changes across People, Process, and Technology.


Relevance Lab has been working closely with leading enterprises across the Digital Learning, Health Sciences, and Financial Asset Management verticals on creating a common “Open Platform” that helps bring an Automation-First approach and a maturity model to incrementally make Automation more “Intelligent”.



Relevance Lab offers RLCatalyst, an AIOps platform driven by Intelligent Automation that paves the way for a faster and more seamless Digital Transformation journey. The RLCatalyst product is focused on driving “Intelligent” AUTOMATION.


AUTOMATION is the core functionality including:
  • DevOps Automation targeting Developer & Operations use cases
  • TechOps Automation targeting IT Support & Operations use cases
  • ServiceOps Automation targeting ServiceDesk & Operations use cases
  • SecOps Automation targeting Security, Compliance & Operations use cases
  • BusinessOps Automation targeting RPA, Applications/Data & Operations use cases

Driving Automation to be more effective and efficient with “Intelligence” is the key goal, and it is driven by a maturity model.
“Intelligence” based Maturity model for Automation
Level-1: Automation of tasks normally assisting users
Level-2: Integrated Automation focused on Process & Workflows replacing humans
Level-3: Automation leveraging existing Data & Context to drive decisions in more complex processes leveraging Analytics
Level-4: Autonomous & Cognitive techniques using Artificial Intelligence for Automation



RLCatalyst Building Blocks for AIOps

AIOps platforms need common building blocks for “OBSERVE – ENGAGE – ACT” functionality. As enterprises expand their Automation coverage across DevOps, TechOps, ServiceOps, SecurityOps, and BusinessOps, all three stages are needed: Observe (with sensors), Engage (workflows), and Act (automation and remediation). A conceptual sketch of this loop appears after the list below.


RLCatalyst provides solutions for enterprises to create their version of an Open Architecture based AIOps Platform that can integrate with their existing landscape and provide a roadmap for maturity.


  • RLCatalyst Command Centre “Integrates” with different monitoring solutions to create an Observe capability
  • RLCatalyst ServiceOne “Integrates” with ITSM solutions (ServiceNow and Freshdesk) for the Engage functionality
  • RLCatalyst BOTS Engine “Provides” a mature solution to “Design, Run, Orchestrate & Insights” for Act functionality
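As a conceptual illustration of the Observe – Engage – Act loop (not the RLCatalyst API), the hypothetical shell sketch below polls a health endpoint, raises an ITSM incident through ServiceNow’s standard Table API, and runs a remediation step. All URLs, credentials, and service names are placeholders.

```bash
#!/usr/bin/env bash
# Hypothetical Observe - Engage - Act loop; endpoints and names are placeholders.

# OBSERVE: a sensor polls the application's health endpoint.
if ! curl -sf "https://app.example.com/health" > /dev/null; then
  # ENGAGE: raise an incident in the ITSM tool (ServiceNow Table API shown).
  curl -s -u "$SN_USER:$SN_PASS" -H "Content-Type: application/json" \
    -d '{"short_description":"Health check failed for app.example.com"}' \
    "https://yourinstance.service-now.com/api/now/table/incident" > /dev/null
  # ACT: run an automated remediation, here a simple service restart.
  sudo systemctl restart app.service
fi
```

In a mature platform, each of the three steps would be handled by the corresponding component above rather than a single script.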


For more information, feel free to contact marketing@relevancelab.com



2022 Blogs, Blog, DevOps Blog, Featured

Automated deployment of software makes the process faster, easier, repeatable, and more supportable. A variety of technologies are available for deployment, but you need not necessarily choose a complex automation approach to reap the benefits. In this blog, we will cover how Relevance Lab approached using automation for the deployment of their RLCatalyst Research Gateway solution.

RLCatalyst Research Gateway solution from Relevance Lab provides a next-generation cloud-based platform for collaborative scientific research on AWS with access to research tools, data sets, processing pipelines, and analytics workbenches in a frictionless manner. The solution can be used in the Software as a Service (SaaS) mode, or it can be deployed in customers’ accounts in the enterprise mode. It takes less than 30 minutes to launch a working environment for Principal Investigators and Researchers with security, scalability, and cost governance.


During the deployment of this solution, several AWS resources are created:

  • Networking (VPC, Public and private subnets, Internet and NAT Gateways, ALB)
  • Security (Security Groups, Cognito Userpool for authentication, Identity and Access Management (IAM) Roles and Policies)
  • Database (AWS DocumentDB cluster)
  • EC2 Compute
  • EC2 Image Builder pipelines
  • S3 Buckets (storage)
  • AWS Service Catalog products and portfolios

When such a variety of resources are to be created, there are several benefits of automating the deployment.

  • Faster Deployment: It takes an engineer at least a few hours to deploy all the resources manually, assuming everything works to plan. If errors are encountered, it takes longer. With an automated deployment, the process is much quicker, and it can be done in 15-30 minutes.
  • Easier: The deployment automation encapsulates and hides a lot of the complexity of the process, and the engineer performing the task does not need to know a lot of the different technologies in depth. Also, since the automation has been hardened over time through repeated testing in the lab, much of the error handling has been codified within the scripts.
  • Repeatable: The deployment done via automation always comes out exactly as designed. Unlike manual deployment, where unforced user errors can creep in, the scripts perform each run exactly the same. Also, scripts can be coded to fix broken installs or redeploy solution software.
  • Supportable: Automation scripts can have logging, which makes it easy for support personnel to help in case things don’t go as planned.

There are many technologies that can help automate the deployment of software. These include tools like Chef and Ansible, language-specific package managers like PyPI or npm, and Infrastructure as Code (IaC) tools like CloudFormation or Terraform. For RLCatalyst Research Gateway, which is built on AWS, we picked CloudFormation Templates (CFT) for our IaC needs, in combination with plain old shell scripts. Find our deployment scripts on GitHub; a simplified sketch of the stack deployment appears after the list below.


  • Pre-requisites: We deploy Research Gateway in a standard Virtual Private Cloud (VPC) architecture with both public and private subnets. This VPC can be created using a quickstart available from AWS itself.
  • Infrastructure: The infrastructure is created as five different stacks.
    • Amazon S3 bucket: This is used to hold all the deployment artifacts like CFT templates.
    • AWS Cognito UserPool: This is used for authentication.
    • AWS DocumentDB: This is used to store all persistent data required by Research Gateway.
    • Amazon EC2 Image Builder: Pipelines are created to rebuild Amazon Machine Image (AMI) for the standard catalog items that are AMI-based. This ensures that the AMIs have the latest patches and security fixes.
    • Amazon EC2 (main stack): This hosts the Research Gateway portal.
  • Configuration: Some of the instance-specific data is part of the configuration, which is stored in one of the following ways.
    • Files: Configuration files are created during the deployment process, using data provided at the time. These files are referred to by the solution software to customize its behavior. File-based configurations are easier to access for support personnel and can be easily checked in case the solution software is not behaving as expected.
    • Database Entries: A configs collection in the database hosts some of the information. Ideally, all configurations can reside in the database, but because the database is encrypted and has restricted access, we prefer to keep some of the configurations outside the DB.
    • AWS Systems Manager (SSM) Parameter Store: Some configurations, especially those related to AMIs, which are resolved by CFTs at run-time, are maintained in the AWS SSM Parameter store.
  • Research Gateway Solution Software: Distributed as Docker images via Amazon Elastic Container Registry (ECR). This allows us to distribute the solution software privately to the customers’ AWS accounts. Our solution software runs as a set of Docker services. A variation of the deployment script can also deploy these as services into Amazon Elastic Kubernetes Service (EKS).
  • Load-balancing: The EC2 instances deployed register themselves with Target Groups, and an Application Load Balancer serves the application securely over SSL using certificates hosted in AWS Certificate Manager.
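As a simplified sketch (the actual scripts are in the GitHub repository linked above), deploying the five stacks in dependency order with the AWS CLI might look like the following. Template paths, stack names, and region are hypothetical.

```bash
#!/usr/bin/env bash
set -euo pipefail
# Hypothetical sketch: deploy the five CloudFormation stacks in order.
REGION="us-east-1"

# Order matters: the S3 artifact bucket must exist before later stacks
# reference templates and artifacts stored in it.
for stack in s3-artifacts cognito-userpool documentdb image-builder main-ec2; do
  aws cloudformation deploy \
    --region "$REGION" \
    --template-file "templates/${stack}.yaml" \
    --stack-name "rg-${stack}" \
    --capabilities CAPABILITY_NAMED_IAM
done
```

Because `aws cloudformation deploy` is idempotent, re-running the script updates existing stacks instead of failing, which is what makes the automated deployment repeatable.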

Once the solution software is deployed and the portal is running and reachable, the first user (with an Admin role) is created using a script. Using that Administrator’s credentials, the customer can complete the rest of the onboarding process from the UI.

Summary
Using the automated deployment process, an instance of the RLCatalyst Research Gateway can be provisioned and configured in less than 30 minutes. This allows customers to start using the solution quickly and derive maximum benefits from their investment with minimum effort.

If you would like to launch your scientific research environment in less than 30 minutes with RLCatalyst Research Gateway or would like to learn more about it, write to us at marketing@relevancelab.com.





2020 Blog, Blog, DevOps Blog, Featured, ServiceOne, ServiceNow

Using Git configuration management integration in Application Development to achieve higher velocity and quality when releasing value-added features and products


ServiceNow offers a fantastic platform for developing applications. All infrastructure, security, application management, scaling, etc. is taken care of by ServiceNow, and application developers can concentrate on their core competencies within their application domain. However, companies trying to develop applications on ServiceNow and distribute them to multiple customers face several challenges. In this article, we take a look at some of those challenges and their solutions.



A typical ServiceNow customization or application is distributed with several of the following elements:


  • Update Sets
  • Template changes
  • Data Migration
  • Role creation
  • Script changes

Distribution of an application is typically done via an Update Set, which captures all the delta changes on top of a well-known baseline. This baseline could be the base version of a specific ServiceNow release (like Orlando or Madrid) plus a specific patch level for that release. To understand the intricacies of distributing an application, we first have to understand the concept of a global application versus a scoped application.


Typically, only applications developed by ServiceNow are in the global scope. However, before the Application Scoping feature was released, custom applications also resided in the global scope. This means that other applications can read the application data, make API requests, and change the configuration records.


Scoped applications, which are now the default, are uniquely identified along with their associated artifacts with a namespace identifier. No other application can access the data, configuration records, or the API unless specifically allowed by the application administrator.


Distributing an application via update sets is easy if the application has a private scope, since there are no challenges with global data dependencies.


The second challenge is with customizations done after an application has been distributed. Consider the following scenario:


  • An application release has been distributed (let’s call it 1.0).
  • Customer-1 needs a customization in the application (say, a blue button is to be added in Form-1). Customer-1 now has 1.0 plus the blue-button change.
  • Customer-2 needs a different customization (say, a red button is to be added in Form-1).
  • The application developer has also made other changes in the application and plans to release version 2.0.

Problem-1: If application 2.0 is released and Customer-1 upgrades to that release, they lose the blue-button changes. They have to redo the blue-button change and retest.



Problem-2: If the developer accepts blue button changes into the application and releases 2.0 with blue button changes, when Customer-2 upgrades to 2.0, they have a conflict of their red button change with the blue-button change.



These two problems can be solved by using version control with Git. When the application developers want to accept the blue-button changes into the 2.0 release, they can use Git’s merge feature to merge the commit of the blue-button changes from Customer-1’s repo into their own repo.


When Customer-2 needs to upgrade to version 2.0, they use Git’s stash feature to set aside their red-button changes prior to the upgrade. After the upgrade, they can apply the stashed changes to get the red-button changes back into their instance, as sketched below.
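A minimal sketch of both workflows, assuming the application repositories are plain Git repos; the remote, branch, and commit names are hypothetical.

```bash
# Problem-1: the application developer merges Customer-1's blue-button
# commit into the upcoming 2.0 release branch.
git remote add customer1 https://git.example.com/customer1/app.git
git fetch customer1
git checkout release-2.0
git merge customer1/master      # or: git cherry-pick <blue-button-commit>

# Problem-2: Customer-2 sets aside the local red-button changes before
# upgrading, then re-applies them afterwards.
git stash push -m "red button customization"
# ...upgrade the application to version 2.0...
git stash pop                    # bring the red-button changes back
```

If the stashed changes touch the same records the upgrade modified, `git stash pop` reports conflicts, which is exactly where the customization must be reconciled with the new release.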


The ServiceNow source control integration allows application developers to integrate with a GIT repository to save and manage multiple versions of an application from a non-production instance.


Using DevOps best practices and version control with Git, it is much easier to deliver software applications to multiple customers while dealing with the complexities of customized versions. To know more about ServiceNow application best practices and DevOps, feel free to contact: marketing@relevancelab.com

