2023 Blog, AI Blog, Blog, Featured

With the rise of Artificial Intelligence (AI), many enterprises and existing customers are looking into ways to leverage this technology for their own development purposes and use cases. The field is rapidly attracting investment, with adoption proceeding iteratively from simple use cases to more complex business problems. In working with early customers, we have found the following themes to be the first use cases for GenAI adoption in an enterprise context:

  • Interactive Chatbots for simple Questions & Answers 
  • Enhanced Search with Natural Language Processing (NLP) using Document Repositories with data controls
  • Summarization of Enterprise Documents and Expert Advisor Tools

While OpenAI provides models for building solutions, many early adopters prefer the Microsoft Azure OpenAI Service for its stronger enterprise features.

Microsoft’s Azure OpenAI Service provides REST API access to OpenAI’s powerful language models, including the GPT-4, GPT-3.5-Turbo, and Embeddings model series. With Azure OpenAI, customers get the security capabilities of Microsoft Azure while running the same models as OpenAI. Azure OpenAI offers private networking, regional availability, and responsible AI content filtering, along with security and governance.
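
As a minimal sketch of what that REST access looks like, the snippet below calls an Azure OpenAI chat completions endpoint with Python's requests library. The resource URL, deployment name, API version, and key shown here are placeholders you would replace with your own values.

```python
import requests

# Placeholders: substitute your own Azure OpenAI resource, deployment, and key.
endpoint = "https://<your-resource>.openai.azure.com"
deployment = "gpt-35-turbo"
api_version = "2023-05-15"
url = f"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version={api_version}"

payload = {
    "messages": [
        {"role": "system", "content": "You are an enterprise assistant."},
        {"role": "user", "content": "Summarize our travel reimbursement policy."},
    ],
    "temperature": 0.2,
}
headers = {"api-key": "<your-key>", "Content-Type": "application/json"}

response = requests.post(url, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the endpoint lives inside the customer's Azure subscription, the same call can be restricted to private networking and monitored with Azure's standard security tooling.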

Introduction to GenAI

  1. What is Generative AI?
    • A class of artificial intelligence systems that can create new content, such as images, text, or videos, resembling human-generated data by learning patterns from existing data.
  2. What is the purpose?
    • To create new content or generate responses that are not based on predefined templates or fixed responses.
  3. How does it work?
    • Data is collected through methods such as web scraping or reading documents, directories, or indexes. The data is then preprocessed to clean and format it for analysis. AI models, such as machine learning and deep learning algorithms, are trained on this preprocessed data to make predictions or classifications. By learning patterns from existing data, the models use that knowledge to produce new, original content.
  4. How can it be used by enterprises?
    • To assist end users (internal or external) in the form of next generation Chatbots.
    • To assist stakeholders with automating certain internal content creation processes.

Early Customer Adoption Experience
Customers wanted to experience GenAI for building awareness, validation of early use cases, and “testing the waters” with enterprise-grade security and governance for GenAI technology.

Early Use Cases Identified for Development
The primary focus area was content management for enterprise data, concentrating on the following:

  1. End User Assistance (Chatbot)
    • Product Website Chatbot
    • Intranet Chatbot
  2. Content Creation
    • Document Summarization
    • Template based Document Generation
  3. SharePoint
    • Optical Character Recognition (OCR)
    • Cognitive Search
  4. Decision-making & Insights

Key Considerations for GenAI Leverage

  1. Limitations of current Chatbots
    • OCR
    • Closed chatbot allowing selection of pre-populated options
    • Limited scope and intelligence of responses
  2. Benefits expected from GenAI enhanced Chatbots
    • OCR
    • Human like responses
    • Ability to adapt quickly to new information
    • Multi-lingual
    • Restricts the data the Chatbot can draw from to verified enterprise sources
  3. Potential Concerns
    • Can contain biases unintentionally learned by the model
    • Potential for errors and hallucinations

System Architecture
The system architecture using Azure OpenAI takes advantage of several services provided by Azure.



The architecture may include the following components:

Azure OpenAI Service
Azure OpenAI Service is a comprehensive suite of AI-powered services and tools provided by Microsoft Azure. It offers a wide range of capabilities, including natural language processing, speech recognition, computer vision, and machine learning. With Azure OpenAI Service, developers can easily integrate powerful AI models and APIs into their applications, enabling them to build intelligent and transformative solutions.

Azure Cognitive Services
Azure Cognitive Services offers a range of AI capabilities that can enhance Chatbot interactions. Services like Language Understanding (LUIS), Speech Services, Search Service, Vision Services and Knowledge Mining can be integrated to enable natural language understanding, speech recognition, and knowledge extraction.
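
As an illustration of the knowledge-mining piece, the hedged sketch below queries an Azure Cognitive Search index using the azure-search-documents Python SDK. The service endpoint, index name, key, and the "content" field are assumptions standing in for your own setup.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Placeholder endpoint, index, and key for an existing Cognitive Search service.
search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="enterprise-docs",
    credential=AzureKeyCredential("<search-query-key>"),
)

# Full-text query against the indexed enterprise documents.
results = search_client.search(search_text="parental leave policy", top=5)
for doc in results:
    # Assumes the index stores the document body in a field named "content".
    print(doc["content"][:200])
```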

Azure Storage
Azure Storage is a highly scalable and secure cloud storage solution offered by Microsoft Azure. It provides durable and highly available storage for various types of data, including files, blobs, queues, and tables. Azure Storage offers flexible options for storing and retrieving data, with built-in redundancy and encryption features to ensure data protection. It is a fundamental building block for storing and managing data in cloud-based applications.
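
A minimal sketch of how documents might land in Azure Storage before indexing, using the azure-storage-blob SDK; the connection string, container, and file names are placeholders.

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string and container for the document repository.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="enterprise-docs", blob="hr/travel-policy.pdf")

# Upload a local file; overwrite keeps repeated ingestion runs idempotent.
with open("travel-policy.pdf", "rb") as data:
    blob.upload_blob(data, overwrite=True)
```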

Form Recognizer
Form Recognizer is a service provided by Azure Cognitive Services that uses machine learning to automatically extract information from structured and unstructured forms and documents. By analyzing documents such as invoices, receipts, or contracts, Form Recognizer can identify key fields and extract relevant data. This makes it easier to process and analyze large volumes of documents. It simplifies data entry and enables organizations to automate document processing workflows.
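
For example, here is a hedged sketch of extracting invoice fields with the azure-ai-formrecognizer SDK and its prebuilt invoice model; the endpoint, key, and file name are placeholders.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

# Placeholder endpoint and key for a Form Recognizer resource.
client = DocumentAnalysisClient(
    endpoint="https://<your-form-recognizer>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<form-recognizer-key>"),
)

# Analyze a local invoice with the prebuilt invoice model.
with open("invoice.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-invoice", document=f)
result = poller.result()

# Print the extracted key fields with their confidence scores.
for document in result.documents:
    for name, field in document.fields.items():
        print(name, field.value, field.confidence)
```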

Service Account
A new service account would be required for the team to establish connections with Azure services programmatically. The service account will need elevated privileges, as required for the APIs to communicate with Azure services.

Azure API Management
Azure API Management provides a robust solution to address hurdles like throttling and monitoring. It facilitates the secure exposure of Azure OpenAI endpoints, keeping them protected, performant, and observable. Furthermore, it offers comprehensive support for the discovery, integration, and use of these APIs by both internal and external users.

Typical Interaction Steps between Components
The diagram below shows the typical interaction steps between different components.



  1. The Microsoft Cognitive Search engine indexes content from the document repository as an asynchronous process.
  2. Using the frontend application, the user interacts with the Chatbot and sends a query.
  3. The Azure API layer forwards the query to the GPT text model, which transforms the user query into an optimized search input.
  4. The GPT text model returns this optimized search input to the Azure API orchestration layer.
  5. The API layer sends the search query to Cognitive Search.
  6. Cognitive Search returns the relevant content.
  7. The API layer sends the result from Cognitive Search, along with other details such as the prompt, chat context, and history, to GenAI for response generation.
  8. Generated and summarized content is returned from GenAI.
  9. The meaningful results are shared back with the user.
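
A hedged sketch of how an orchestration layer might implement steps 3 through 8 above in Python, combining Azure Cognitive Search with an Azure OpenAI chat deployment. The endpoints, keys, deployment name, index name, and the "content" field are placeholders, and the prompt wording is illustrative only.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI  # assumes the openai>=1.x package with Azure support

aoai = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2023-05-15",
)
search = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="enterprise-docs",
    credential=AzureKeyCredential("<search-key>"),
)

def answer(user_query: str, history: list) -> str:
    # Steps 3-4: have the GPT model rewrite the chat query as an optimized search input.
    rewrite = aoai.chat.completions.create(
        model="gpt-35-turbo",
        messages=history + [{"role": "user",
                             "content": f"Rewrite this as a concise search query: {user_query}"}],
    )
    search_input = rewrite.choices[0].message.content

    # Steps 5-6: retrieve relevant content from Cognitive Search.
    docs = [d["content"] for d in search.search(search_text=search_input, top=3)]

    # Steps 7-8: send prompt, retrieved content, and chat history for grounded generation.
    final = aoai.chat.completions.create(
        model="gpt-35-turbo",
        messages=history + [
            {"role": "system", "content": "Answer using only the provided sources."},
            {"role": "user", "content": "Sources:\n" + "\n---\n".join(docs)
                                        + f"\n\nQuestion: {user_query}"},
        ],
    )
    return final.choices[0].message.content
```

In this pattern only the retrieved snippets, not the raw repository content, ever reach the model, which is the governance point made below.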

These interactions demonstrate that in this architecture the documents remain inside the secure Azure network and are managed by the search engine. Raw content is never shared with the OpenAI layer, providing controlled governance for data security and privacy.

Summary
Relevance Lab is working with early customers for GenAI Adoption using our AI Compass Framework. The customers’ needs vary from initial concept understanding to deploying with enterprise-grade guardrails and data privacy controls. Relevance Lab has already worked on 20+ GenAI BOTs across different architectures leveraging different LLM Models and Cloud providers with a reusable AI Compass Orchestration solution.

To know more about how we can help you adopt GenAI solutions “The Right Way”, write to us at marketing@relevancelab.com, and for a demonstration of the solution, contact AICompass@relevancelab.com.

References
Revolutionize your Enterprise Data with ChatGPT
Augmenting Large Language Models with Verified Information Sources: Leveraging AWS SageMaker and OpenSearch for Knowledge-Driven Question Answering
What’s Azure Cognitive Search?




With the emerging disruption of GenAI, every enterprise is preparing to adopt radical new technology that will drive a major wave of innovation. Relevance Lab, with its existing experience in Enterprise AI for business decisions, is now planning to leverage GenAI to bring better solutions to our customers. The focus is on enhancing end-customer experiences, generating new growth, and driving internal cost efficiencies with our “AI Compass Framework”. Click here for the full story.


2023 Blog, AI Blog, AIOps Blog, Blog, Featured

As part of the growing interest and attention on GenAI market trends, enterprise priorities in 2023 have rapidly shifted from tracking the trends to tremendous pressure to adopt this disruptive technology. While interest is very high, most enterprises are grappling with where to start and which approach to use. Investments from CIO budgets are being quickly carved out, but the basic dilemma remains around early use cases, security & privacy issues with enterprise data, and which platforms & tools to leverage. Relevance Lab has launched an “AI Taskforce” that includes key internal participants and customer advisory teams for this innovation. The primary focus is to define core and priority themes relevant for business and customers based on current assessment. This is an emerging space with significant global investment, and the innovation is expected to drive major disruption in the next decade. We believe this requires an iterative model for strategy and an agile approach, with focused concept incubations in close collaboration with our customers.

Customer Needs for AI Adoption
The most common ask from customers is to use GenAI for their business, with primary goals around the following business objectives:

  • Enhancing their end customer experience and business outcomes.
  • Saving costs with better efficiency leveraging the new AI models & interaction channels.
  • Improving their core Products & Offerings with AI to ensure the business does not get disrupted or become irrelevant against competition.

The figure below captures the summary of customer asks, common business problems, and categories of solutions being explored.



Translating the above objectives into meaningful and actionable pursuits requires focusing on key friction points and leveraging the power of AI. Some common use cases we have encountered include the following:


  • Increasing online-user purchases and conversions by 20% with personalized customer experiences.
  • Better revenue realization with dynamic pricing and propensity analysis.
  • Reduced subscription renewal loss through early detection & engagement with 90%+ predictability.
  • Wastage reduction (US$10M annually) for a global pharma company with AI-led optimization algorithms.
  • Better price realization for procurement (15%+) through anomaly detection in plan purchase analytics.
  • Better information aggregation and curation for mortgages with Machine Learning (ML) classifications.

The following are early initiatives underway for our customers leveraging GenAI:

  • Pharma Product Reviews Summary and Advisor with GenAI.
  • Deployment of Private Foundation Models and training with custom data & business rules for Advisory services in Financial Services.
  • Use of Chatbots for easier user and customer support for Media customers.
  • Access to Business Dashboards with Generative Models using prompts for E-Commerce customers in Retail.
  • Increasing productivity of Development and Testing efforts with GenAI specialized tools for Technology ISVs.

There is no doubt that the momentum of such early technology adoption is growing every day. This needs a structured program of collaboration with our customers to look for common building blocks and rapid model creation, training, deployment, interaction, and fine-tuning.

Relevance Lab AI Compass Framework
We have launched the “Relevance Lab AI Compass Framework” to guide and collaborate with customers in defining the early areas of focus for building solutions that leverage AI. The goal is a prescriptive model that helps jumpstart the adoption of Enterprise AI and GenAI “The Right Way”. The figure below explains the framework.

The AI Compass Framework takes a 360-degree perspective on assessing the AI needs of an enterprise across the following pillars.

  • Product Engineering – building products that embed the power of AI
  • Business Data Decisions enhanced with AI
  • Machine Data Analysis enhanced with AI
  • Using GenAI for Business
  • Platform AI Competences – choosing the right foundation
  • Cloud AI Services – leveraging the best of breed
  • Digital Content with GenAI
  • Robotic Process Automation enhanced with AI and Intelligent Document Generation
  • Preparing Enterprise Workforce – Training with AI
  • Managed Services & Support made more efficient & cost effective
  • Improving internal Tester and Quality Productivity with AI Tools
  • Developer Productivity enhancements with AI Tools


Relevance Lab is getting deeper into the above pillars and building the right design patterns to guide our customers on “The Right Way” for enterprise adoption. The plan is also to build a foundational AI applications platform that will speed up adoption for end customers, saving them time and effort while ensuring quality deliverables.

Product Engineering with AI 
This pillar focuses on making AI architecture and design patterns part of better product design. The charter is to find and recommend new architectures and integrations with new GenAI models to make existing software products smarter with embedded AI techniques. We expect new products to adopt an “AI-First” approach to new development. Every product in its focus vertical (Healthcare & Life Sciences, BFSI, Media & Communication, Technology) will need to embed AI into its core architecture.

Business Data Decisions with AI
This pillar defines AI-enhanced Data Engineering for common use cases and building blocks. The traditional focus of AI initiatives has been primarily on providing agile & actionable insights into the following:

  • What happened in my business? This is Informative.
  • What will happen? This is Predictive.
  • What should be done? This is Prescriptive.

The new dimension GenAI has added to the above is around “Generative” capabilities. Along with the need for building new features, there is growing adoption of popular data platforms like Databricks, Snowflake, Azure Data Factory, AWS Data Lake etc. that need to integrate with product specific AI enhancements.

Machine Data Analysis with AI
Customers already focus on DevOps and AIOps, with large volumes of data generated from servers, applications, networks, security, and storage using different monitoring tools. However, there is a deluge of information and a need to reduce noise and improve response times for effective operations support. This requires alert intelligence to reduce alert fatigue and incident intelligence to observe data across layers for faster issue diagnosis and fixes. Anomaly detection is a key need with time-series data, looking for odd patterns and flagging risks such as security issues and vulnerabilities. While AIOps brings together the need for AI across Observability, Automation, and Service Delivery, there are also ways to leverage new GenAI tools for better Chatbot support, reducing operational costs and increasing efficiency. A common ask from customers is the ability to predict a failure and prevent an outage in real time with AI using these models. This requires Site Reliability Engineering (SRE) solutions designed to be more effective with AI techniques.
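
As a minimal sketch (not Relevance Lab's production approach), anomaly detection on a machine metric can start with something as simple as a rolling z-score before graduating to learned models; the window and threshold below are illustrative assumptions.

```python
import pandas as pd

def flag_anomalies(metric: pd.Series, window: int = 60, threshold: float = 3.0) -> pd.Series:
    """Mark points that deviate sharply from the recent behavior of a time-series metric."""
    rolling_mean = metric.rolling(window).mean()
    rolling_std = metric.rolling(window).std()
    z_score = (metric - rolling_mean) / rolling_std
    return z_score.abs() > threshold

# Usage sketch: cpu = pd.Series(values, index=timestamps)
# alerts = cpu[flag_anomalies(cpu)]  # candidate points for incident intelligence
```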

As with intelligent observability for infrastructure and applications using AI/ML models, there is a growing need for data pipeline observability with specialized models. With the growing scale of ML models, there is a need to track drift across design, model, and data for such pipelines, with dashboards for visualization and actionable analytics.

Using GenAI for Business
One of the most common asks is to leverage ChatGPT APIs and suggest ways to apply this disruptive technology to existing customer and internal needs. Customers want to use it to reduce internal costs and improve the external end-customer experience, starting with quick projects that define common use cases and then going deeper with customer-specific private data and models.

We are working with early adopter customers on how to prepare and leverage GenAI for their business problems across different verticals. All large enterprises have carved out special initiatives on “How to Use GenAI” and we offer a unique program to incubate these projects.

Platform AI Competencies 
AI platforms are driving innovation and solutions for companies building specialized applications, spanning open-source LLMs (Large Language Models), OpenAI APIs, reusable model libraries, TensorFlow, Hugging Face, the open-source LangChain library, Microsoft Orca, Databricks, etc. This pillar goes deep into specialized use cases such as feature extraction, text classification, prompt engineering, Chatbots, summarization, generative writing, ideation, and reinforcement learning.
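
As one small, hedged example of such a building block, the snippet below uses a Hugging Face Transformers pipeline for zero-shot text classification, one of the listed use cases; the example sentence and labels are made up for illustration.

```python
from transformers import pipeline

# Downloads a default public zero-shot classification model on first use.
classifier = pipeline("zero-shot-classification")

result = classifier(
    "The invoice payment failed twice this week.",
    candidate_labels=["billing issue", "feature request", "general feedback"],
)
print(result["labels"][0], round(result["scores"][0], 3))  # top predicted label and its score
```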

Cloud AI Services 
With customers’ significant existing investments in public cloud providers like AWS, Azure, and GCP, there is a growing need to leverage specialized AI offerings from these providers to jumpstart adoption with security and scalability in an enterprise context. There is also growing momentum behind new GenAI solutions from these providers, such as AWS CodeWhisperer, Amazon Bedrock, Azure Synapse, Microsoft Responsible AI, and specialized tools & training from Google Cloud. Growing adoption will require a deep understanding of, and support for, MLOps and LLMOps for efficient and cost-effective operations.

Digital Content with GenAI 
One of the biggest impacts of GenAI is the evolution of smarter search and information access across customers’ existing repositories of documents, FAQs, content platforms, product brochures, etc., covering all sorts of unstructured and semi-structured information. Customers are looking to combine public and proprietary LLMs (Large Language Models) with their private data repositories and fine-tuned models encoding custom business rules. This requires customers to build, train, and deploy their own models while keeping security under control and data privacy protected.

The right architecture strikes a balance between using standard models with enterprise data and privately deployed models for enterprise content solutions.
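
A hedged sketch of the "standard models with enterprise data" approach: compute embeddings for document chunks with an Azure OpenAI embeddings deployment so they can be indexed for retrieval, without sending whole repositories for model training. The endpoint, key, and deployment name are placeholders.

```python
from openai import AzureOpenAI  # assumes the openai>=1.x package with Azure support

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2023-05-15",
)

chunks = [
    "Employees may carry over up to five days of unused leave.",
    "Travel above business class requires VP approval.",
]

# One embedding vector per chunk; these vectors go into a search index for retrieval.
response = client.embeddings.create(model="text-embedding-ada-002", input=chunks)
vectors = [item.embedding for item in response.data]
print(len(vectors), "vectors of dimension", len(vectors[0]))
```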

Managed Services & Support AI 
Chatbots and GenAI can help improve the support lifecycle across Monitoring, ServiceDesk, TechOps, Desktop Support, and User Onboarding/Offboarding. They can help reduce costs and make daily tasks more efficient.

This aligns with customers’ focus on Managed Services, ServiceDesk, Command Centre, Technical Operations, and Security Ops. This pillar looks deeper into AI techniques for Incident Intelligence, Chatbots, Automation, Self-Remediation, and Virtual Agents to become more productive and efficient. Relevance Lab has leveraged an “Automation-First” approach for greater productivity, effective operations & compliance.

Robotic Process Automation with Intelligence
RPA (Robotic Process Automation) is bringing significant gains for business process automation in areas of repetitive & high-frequency tasks, along with better quality & compliance, for use cases across different industries and corporate functions. With AI, many additional benefits can be achieved in making business processes frictionless. This pillar focuses on specialized use cases related to AI-driven BOTs, Data & Document Processing, and Intelligent Decisioning, leveraging AI tools from key partners like UiPath & Automation Anywhere.

Training with AI Technology
Companies are embarking on the goal of making all their employees AI-skilled and certified. Leveraging AI tools in everybody’s day-to-day charter will improve job efficiency. This requires setting up an AI Lab for internal training and certification. To create such a strong foundation, this pillar is looking into creating an AI Academy with a program that drives “Self-Service Learning” and “Accreditation” on a structured basis.

Developer & Testing Productivity with AI Tools 
Adoption of AI and GenAI tools is a key goal for smarter, faster, better outcomes. For testers, the specific areas of focus are Automated Test Case Generation, Integration Test Generation, Security Co-Pilot, Performance Assessment, and Simulated Data Generation. For developers, there are similar plans to boost productivity using Developer Co-Pilot, Auto-Unit Tests, GenAI Code Assist, and Compliance AI.

Co-Development Opportunities with Customers 
As part of expediting the innovation in this emerging area, we are launching a co-development program with early participants to build on use cases specific to customer verticals and domain needs. We have dedicated specialized teams working on deep GenAI and Enterprise AI skills and building re-usable components. We are offering a special six-week program for incubation and jumpstart of GenAI adoption by enterprises to build one specific use case.

To know more about how to collaborate and to share your ideas for early GenAI adoption, contact us at AICompass@relevancelab.com

References
AI Foundation Model: Generative AI on AWS
Azure OpenAI on your Data
Google Generative AI Service Offerings Designed to get you up and Running Fast
Revolutionize your Enterprise Data with ChatGPT
A CIO and CTO Technology Guide to Generative AI




2023 Blog, Blog, Cloud Blog, Featured

Currently, all large enterprises are dealing with multiple cloud providers, and the situation is further complicated where M&A has led to the integration of multiple organizations, and where multiple vendors across Infrastructure, Digital, Enterprise Systems, and Collaboration tools bring their own cloud footprints bundled with services. In this blog, we explain the common scenario faced by large companies and how to create “The Right Way” to adopt scalable Multi-Cloud Workload Planning and Governance Models.

Customer Needs
The customers facing such challenges usually share with us the following brief:

  • Assess existing workloads on AWS, Azure, GCP for basic health & maturity diagnostics.
  • Suggest a mature Cloud Management & Governance model for ensuring “The Right Way” to use the Cloud for multi-account, secure, and compliant best practices.
  • Recommend a model for future workloads migration and choice of cloud providers for optimal usage and ability to move new workloads to cloud easily.

Primary Business Drivers
The following are the key reasons customers seek Multi-Cloud Governance “The Right Way.”

  • Cost optimization and tracking for existing usage.
  • Ability to launch new regions/countries in cloud with easy and secure standardized processes.
  • Bring down the cost of ownership of cloud assets (Infra/Apps/Managed Services) by leveraging automation and best practices.

Approach Taken
The basic approach followed for helping customers through the multi-cloud maturity models involves a PLAN-BUILD-RUN process as explained below:

Step-1: Planning & Assessment Phase
This involves working with customer teams to finalize the architecture, scope, integration, and validation needs for the cloud assessment. The primary activities covered under this phase are the following:

  • Coverage Analysis
    • Do a detailed analysis of all three Cloud Providers (AWS, Azure, GCP) and recommend what should be an ongoing strategy for Cloud Provider adoption.
  • Maturity Analysis
    • Do an assessment of current Cloud usage against industry best practices and share the maturity scorecard of customer setup.
  • Security Exposure
    • Find key gaps on security exposure and suggest ways for better governance.
  • Cost Assessment
    • Consolidation and cost optimization to have more efficient cloud adoption.

The foundation for analysis covers Cloud Provider specific analysis based on Well-Architected Frameworks as explained in the figure below:



Step-2: Build & Operationalize Phase
This primarily involves adoption of mature Cloud Governance360 and Well-Architected Models with best practices across key areas.

  • Accounts & Organization Units
  • Guardrails
  • Workloads Migration
  • Monitoring, Testing, Go-Live & Training
  • Documentation, Basic Automation for Infrastructure as Code
  • SAML Integration

The playbook for Build & Operationalize phase is based on Relevance Lab prescriptive model for using Cloud “The Right Way” as explained in the figure below.



Step-3: Ongoing Managed Services Run Phase
Post go-live, ongoing managed services ensure that the best practices created as part of the foundation are implemented and that an “Automation-First” approach is used for Infrastructure, Governance, Security, Cost Tracking, and Proactive Monitoring. Common activities under the Run phase cover regular tasks; a snapshot is provided below:

Daily Activities:

  • Monitoring & Reporting: App & Infra using CloudWatch (Availability, CPU, Memory, Disk Space, blocked security request details) and cost using Cost Explorer; a sketch follows this list.
  • Alert acknowledgement and Incident handling.
  • Publish daily report.
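
A minimal sketch of the CloudWatch part of that daily check, assuming boto3 credentials are configured and using a hypothetical EC2 instance ID; the same kind of query feeds the daily report.

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Average CPU for the last 24 hours, in hourly buckets, for one instance (hypothetical ID).
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
```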

Weekly Activities:

  • Check Scan Reports for most recent critical vulnerabilities.
  • Monitor Security Hub for any new critical non-compliances.
  • Plan of action to address the same.

Monthly Activities:

  • Patch Management.
  • Budgets vs. Costs report (a sketch follows this list).
  • Clean-up of stale/inactive users/accounts.
  • Monthly Metrics.
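
A hedged sketch of pulling the raw numbers for that budgets-versus-costs report with the AWS Cost Explorer API via boto3; the date range is illustrative and assumes Cost Explorer is enabled on the account.

```python
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

# Monthly unblended cost, grouped by service, for an illustrative one-month window.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-10-01", "End": "2023-11-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```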

ServiceOne framework from Relevance Lab provides a mature Managed Services Model.

Sample Assessment Report
The analysis is done across 4 key areas as covered under Plan phase and explained below.

  • Cloud Provider Specific Analysis
    • Workload distribution analysis across all three providers, mapped against a 50+ item best-practices questionnaire.
  • 5-Pillars Well-Architected Analysis
    • Architecture & Performance Efficiency, Cost Optimization, Reliability & DR, Operational Excellence & Standardization, Security.
    • Global workloads analyzed across all different environments.
  • Security Findings
    • Identified environments on Azure with significant exposure that need fixes.
    • Also suggested AWS Security Hub for formal scorecard and specific steps for maturity.
  • Cost Optimization
    • Analyzed costs across Environments, Workloads, and Apps.

Based on the above, a final assessment report is created with recommendations to fix immediate issues while also addressing medium-term changes for ongoing maturity. The figure below shows a sample assessment report.



Summary
Relevance Lab is a specialist company in cloud adoption and workload planning. Working with 50+ customers across multiple engagements, we have created a mature framework for Multi-Cloud Workload and Governance Assessment. It is built on the foundation of best practices from the Cloud Adoption Framework (CAF) and Well-Architected Frameworks (WAF), enhanced with specific learnings and accelerators based on our Governance360 and ServiceOne offerings, to speed up the transition from unmanaged, ad-hoc models to “The Right Way” of a multi-cloud foundation.

To know more on how we can help feel free to contact us at marketing@relevancelab.com

References
AWS Well-Architected
Microsoft Azure Well-Architected Framework
Google Cloud Architecture Framework
AWS Cloud Adoption Framework (AWS CAF)
Microsoft Cloud Adoption Framework for Azure
Google Cloud Adoption Framework




AI Blog, 2023 Blog, Blog, BOTs Blog, Featured

Relevance Lab is an Automation specialist company providing BOTs and Platforms for Business Processes, Applications, and Infrastructure. Our solutions leverage leading RPA (Robotic Process Automation) tools like UiPath, Automation Anywhere & Blue Prism. We provide re-usable templates for common use cases across Finance & Accounting, HR, IT, and Sales process automation.

By leveraging our robotic process automation services, our clients have realized:

  • 60-80% cost savings
  • 2-3x increase in process speed
  • 35-50% increase in employee productivity
  • Up to 30% FTE (Full-Time Equivalent) headcount reduction

The biggest challenge in RPA adoption for our customers is the “where to start” dilemma. To help identify what can be automated, we have designed the following guidelines for selecting initial use cases for implementation:

  • High frequency and volume workflows
  • High complexity processes
  • Highly error-prone areas where human task quality is a concern
  • Domains with compliance needs that benefit from automated outcomes

Using these broad guidelines across a set of corporate functions we have commonly encountered the following use cases for RPA.

Finance & Accounting Automation

  • Stock Price Update
  • Purchase Order Process
  • Reconciliation Process
  • Payment Process
  • Financial and Loan Origination Process
  • Lease Accounting Process
  • Journal Process
  • Inventory Control Process
  • Error Audit Process
  • Invoice Process

Human Resources (HR) Automation

  • New Hire Onboarding Process
  • Data Approval Process
  • The Policy Processing (TPP)
  • Off-boarding Process
  • Legacy (AS/400) Process
  • Document Handling
  • Employee/HR/IT Process
  • User and Workspace- Employee/Contractor Offboarding
  • Back to Office (COVID) workflow automation and compliances

Infrastructure (IT) Management Automation

  • Distribution List Process
  • User Account Re-conciliation Process
  • Mailbox Automation & Reconciliation Process
  • User Migration & Access Control Verification Process
  • Logs Capture

Sales Automation

  • Contract Data Extraction
  • Sales Reporting
  • Sales Reconciliation Process
  • Material Edits Adjustments

With our comprehensive suite of RPA services, we have not only helped businesses adopt, but also maximize their investments in RPA.

The figure below explains the RPA Top Use Cases solved by Relevance Lab.



RL RPA Offerings
RPA Consulting/Assessment: RPA consulting and assessment is the process of evaluating an organization’s processes and identifying opportunities for automation. It is essential for ensuring that RPA implementation is successful.

RPA Implementation: RPA implementation is the process of deploying and using RPA bots to automate processes. It is essential for realizing the benefits of RPA.

Automation Design: Automation design is the process of designing and implementing automation solutions. It involves understanding the business needs, identifying the processes that are suitable for automation, and designing and implementing the automation solutions.

Automation Support: Automation support is the process of providing support to users of automation solutions. It involves providing help with troubleshooting problems, resolving issues, and providing training on how to use the automation solutions.

The figure below explains our core offerings.



Relevance Lab “Automation-First” RPA Platform Architecture
Applications under Robotic Process Execution
RPA is well suited for enterprises and enterprise applications such as ERP solutions (for example, SAP or Siebel) and large-scale data or records processing applications such as mainframes. Most of these applications are data-centric and data-intensive, with a great deal of setup and repetitive process activity.

RPA Tools

  • Ability to automate any type of application in any environment.
  • Development of software robots that understand recordings and configuration, enhanced with programming logic.
  • Reusable components that can be applied across multiple robots, ensuring modularity, faster development, and easier maintenance.

RPA Platforms
Ability to develop meaningful analytics about robots and their execution statistics.

RPA BOT Workbench
RPA execution infrastructure can sometimes be a bank of parallel physical or virtual lab machines, which can be controlled based on usage patterns. The number of machines can be scaled up or down in parallel to complete the automation task, and execution can run unattended for as long as needed, since it requires no further human interaction or intervention.

The figure below explains the Relevance Lab “Automation-First” RPA Platform Architecture.



How can new customers get started?

  • Reach out to Relevance Lab (write to marketing@relevancelab.com) for a quick discussion and demonstration of the standard solution
  • We will study the processes and help in identifying repetitive and manual tasks
  • Engage in creation of POC while selecting the right RPA Tool
  • Customers with standard needs can get started with a new setup in 4-6 weeks
  • Relevance Lab will also provide on-going support and managed services


Summary
Relevance Lab Automation at a Glance

  • RL has been an automation specialist since 2016 (7+ years).
  • Implemented 30+ successful customer automation projects covering the RPA lifecycle.
  • Globally has 60+ RPA specialists with 150+ certifications.
  • Automated 100+ processes, including customized solutions for industries across Healthcare, BFSI, Retail, Technology Services & Manufacturing.

References
CoE Manager|Automation Anywhere
Build Your Robotic Process Automation Center of Excellence (uipath.com)




2023 Blog, Research Gateway, Blog, Command blog, Feature Blog, Featured

Secure Research Environments provide researchers with timely and secure access to sensitive research data, computation systems, and common analytics tools for speeding up Scientific Research in the cloud. Researchers are given access to approved data, enabling them to collaborate, analyze data, share results within proper controls and audit trails. Research Gateway provides this secure data platform with the analytical and orchestration tools to support researchers in conducting their work. Their results can then be exported safely, with proper workflows for submission reviews and approvals.

Secure Research Environments build on the original concept of a Trusted Research Environment defined by the UK NHS and use the Five Safes framework for the safe use of secure data. The five elements of the framework are:

  • Safe people
  • Safe projects
  • Safe settings
  • Safe data
  • Safe outputs

The solution has the following key building blocks:

  • Data Ingress/Egress
  • Researcher Workflows & Collaborations with costs controls
  • On-going Researcher Tools Updates
  • Software Patching & Security Upgrades
  • Healthcare (or other sensitive) Data Compliances
  • Security Monitoring, Audit Trail, Budget Controls, User Access & Management

The figure below shows implementation of Secure Research Environments solution with Research Gateway.



The basic concept is to design a secure data enclave into or out of which data cannot be transferred without going through pre-defined workflows. Within the enclave itself, any amount or type of storage, compute, or tools can be provisioned to fit the researcher’s needs. Researchers can use common research data and also bring in their own specific data.

The core functionality of Secure Research Environments addresses the following:
Data Management and Preparation
This deals with “data ingress management” from both public and private sources for research. There are functionalities dealing with data ingestion, extraction, processing, cleansing, and data catalogs.

Study Preparation
Depending on the type of study and participants from different institutions, secure data enclave allows for study specific data preparation, allocation, access management and assignment to specific projects.

Secure Research Environment
A controlled cloud environment is provided for researchers to access the study data securely, with no direct ingress-egress capability, and to conduct research using common tools like JupyterLab, RStudio, and VSCode for both interactive and batch processing. The shared study data is pre-mounted on research workspaces, making it easy for researchers to focus on analysis without getting into the complexity of infrastructure, tools, and costs.

Secure Egress Approvals for Results Sharing
Post research if researchers want to extract results from the secure research environment, a specialized workflow is provided for request, review, approvals, and download of data with compliance and audit trails.

The Secure Research Environments Architecture provides for Secure Ingress and Egress controls as explained in the figure below.



Building Block Detailed Steps
Data Management
  • Project Administrator creates the Data Library and research projects.
  • Project Administrator selects the Data Library project.
    • Sets up Study Bucket.
    • Creates the sub-folders to hold data.
    • Sets up an ingress bucket for each researcher to bring in their own data.
    • Shares this with the researcher.
  • Project Administrator selects the Study screen.
    • Creates an internal study for each dataset and assigns it to the corresponding Secure Research project.
    • Creates internal study for each ingress bucket.
  • Project Administrator assigns the researchers to the corresponding secure projects.
Secure Research Environments
  • Researcher logs in.
  • Researcher uploads their own data to the ingress bucket.
  • Researcher creates a workspace (secure research desktop).
  • Researcher connects to workspace.
  • Researcher runs code and generates output.
  • Researcher copies output to egress store.
  • Researcher submits an egress request from the portal.
Egress Application
  • Information Governance lead logs in to Egress portal.
  • IG Lead approves request.
  • Project administrator logs in to portal.
  • Project administrator approves the request.
  • IG Lead logs in and downloads the file.

The need for Secure Research Enclaves is growing across different countries. There is an emerging need for a consortium model, where multiple data producers and consumers interact in a secure research marketplace. The marketplace model is implemented on AWS Cloud and provides tracking of costs and billing for all participants. The solution can be hosted by a third party and provided in a Software as a Service (SaaS) model, driving the key workflows for Data Producers and Data Consumers as explained in the figure below.



Summary
Secure Research Environments are key to enabling large institutions and governmental agencies to speed up research across different stakeholders by leveraging the cloud. Relevance Lab provides a pre-built solution that can speed up the implementation of this large-scale and complex deployment in a fast, secure, and cost-effective manner.

Here is a video demonstrating the solution.

To know more about this solution, feel free to write to marketing@relevancelab.com.

References
UK Health Data Research Alliance – Aligning approach to Trusted Research Environments
Trusted (and Productive) Research Environments for Safe Research
Deployment of Secure Data Environments on AWS
Microsoft Azure TRE Solution




With growing needs for adopting the cloud to speed up scientific research, higher-ed institutions are looking to democratize research with self-service portals for data science. To know more about how Relevance Lab is partnering with the AWS Public Sector group and some leading US universities to create a frictionless research platform leveraging open-source solutions, click here for the full story.

