RL Admin, Author at Relevance Lab


With growing needs for adopting cloud to speed up scientific research, higher-ed institutions are looking to democratize research with self-service portals for data science. To learn more about how Relevance Lab is partnering with the AWS Public Sector group and some leading US universities to create a frictionless research platform leveraging open-source solutions, click here for the full story.



Research computing is a growing need, and the AWS cloud enables researchers to process big data with scalable computing in a secure and flexible manner. While cloud computing is a powerful platform, it also brings complexity, with new tools, nomenclature, and architectures that distract researchers from science and force them to focus on infrastructure, security, governance, and costs. Relevance Lab is partnering with the AWS Public Sector group and some leading US universities to create a frictionless research platform leveraging open-source solutions.

Service Workbench from AWS is a powerful open-source solution for enabling research in the cloud. Customers around the globe are already using this solution for common use cases.

  • Enable researchers to use AWS Cloud with self-service capabilities and a common catalog of tools like EC2, SageMaker, S3, Studies data, etc.
  • Use common data analysis tools like RStudio in a secure and scalable manner.
  • Set up a “Trusted Research Environment” in the cloud with additional controls that enforce ingress/egress data restrictions for compliance.


While Service Workbench's current version (5.2.1) provides a good foundation platform for research, it also has some challenges based on feedback from early adopters, mainly related to the following:

  • Complex setup requiring deep cloud know-how, and an Admin-centric user experience that is not very researcher friendly.
  • Scalability challenges in large-scale research deployments.
  • Hard to customize, with no enterprise support models available to guide customers through a Plan-Build-Run lifecycle.


To fix some of the known limitations of Service Workbench, a new version release is planned and available for preview. The primary benefits of this new release are the following:

  • Upgraded architecture and an API-driven model for better scalability and flexibility.
  • Extendable architecture so partner solutions can easily be created for customer-specific needs, with enterprise support options.
  • Modern and researcher-friendly user experience based on a “Researcher Portal” built by Relevance Lab for an early adopter customer and available as an open-source solution.


Functionality of the Researcher Portal built on Service Workbench Open Source
Working closely with AWS teams and an early adopter customer, a self-service Researcher Portal was created with the primary goal of driving frictionless research in the cloud, with the following key features:

  • Built as an open-source solution and made available to a consortium of institutions interested in collaborating on a common Data Science Platform for research.
  • A “Project Centric” model enabling collaboration among researchers with common data, tools, and research goals in a self-service manner.
  • Modern architecture with support for containers, enabling researchers to bring their own tools covering web-based software, desktop-based tools, and terminal-based solutions, seamlessly accessed from the Researcher Portal.
  • Enable researchers to launch applications and choose configurations without knowledge of Cloud Infrastructure details for both regular and GPU workloads.
  • Integrated with project-centric datasets for research, with an easy browser-based interface to upload/download data.
  • Ability to run multiple research projects across different AWS accounts with secure and scalable setup and guardrails.
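The browser-based data upload/download described above is commonly implemented with S3 presigned URLs, so the browser talks to S3 directly without holding AWS credentials. This is an illustrative pattern, not a description of the portal's internals; the bucket layout and key scheme below are assumptions.

```python
def dataset_key(project: str, filename: str) -> str:
    """Build a project-scoped S3 key, rejecting path tricks like '../'."""
    safe = filename.replace("\\", "/").split("/")[-1]
    if not safe or safe.startswith("."):
        raise ValueError(f"invalid filename: {filename!r}")
    return f"projects/{project}/datasets/{safe}"

def presigned_upload_url(bucket: str, project: str, filename: str, expires=900):
    """Return a short-lived URL a browser can PUT a dataset file to directly."""
    import boto3  # deferred so dataset_key stays testable offline
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": dataset_key(project, filename)},
        ExpiresIn=expires,
    )

print(dataset_key("genomics-01", "reads/sample.fastq.gz"))
# → projects/genomics-01/datasets/sample.fastq.gz
```

A matching download URL would use `"get_object"` instead of `"put_object"`; the project prefix keeps datasets isolated per research project.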

The key functional flows needed by a Researcher are explained in the figure below:



Solution Architecture of Researcher Portal
The building blocks for the solution leverage the new Service Workbench release, available with API-only functionality, and add a separate Researcher Portal (RP) layer that provides a UI-driven application for Researcher roles and a CLI interface for Admin users. The figure below captures the building blocks for this solution.



Deployment Architecture of Researcher Portal
The solution is deployed in an enterprise model for each customer in their AWS accounts and recommends the following architecture based on the AWS Well-Architected Framework, as explained in the figure below.



Sample screens for Researcher Portal
The key functionality for the solution is explained in some sample screens below.




With growing interest and investments in new concepts like Automation and Artificial Intelligence, the common dilemma for enterprises is how to scale these for significant impact in their relevant context. It is easy to do a small proof of concept, but much harder to make a broader impact across the landscape of Hybrid Infrastructure, Applications, and Service Delivery models. Even more complex is Organizational Change Management for the underlying processes, culture, and “Way of Working”. There is no “Silver bullet” or “cookie-cutter” approach that can deliver radical change; it requires an investment in a roadmap of changes across People, Process, and Technology. The RLCatalyst solution from Relevance Lab provides an Open Architecture approach to interconnect various systems, applications, and processes, similar to the “Enterprise Service Bus” model.

What is Intelligent Automation?
The key building blocks of automation depend on the concept of BOTs. So, what are BOTs?


  • BOTs are automation codes managed by ASB orchestration
    • Infrastructure creation, update, deletion
    • Application deployment lifecycle
    • Operational services, tasks, and workflows – Check, Act, Sensors
    • Interacting with Cloud and On-prem systems with integration adapters in a secure and auditable manner
    • Targeting any repetitive Operations tasks managed by humans that are frequent, complex (time-consuming), security/compliance related

  • What are types of BOTs?
    • Templates – CloudFormation, Terraform, Azure Resource Models, Service Catalog
    • Lambda functions, Scripts (PowerShell/python/shell scripts)
    • Chef/Puppet/Ansible configuration tools – Playbooks, Cookbooks, etc.
    • API Functions (local and remote invocation capability)
    • Workflows and state management
    • UIBOTs (with UiPath, etc.) and un-assisted non-UI BOTs
    • Custom orchestration layer with integration to Self-Service Portals and API Invocation
    • Governance BOTs with guardrails – preventive and corrective

  • What do BOTs have?
    • Infra as a code stored in source code configuration (GitHub, etc.)
    • Separation of Logic and Data
    • Managed Lifecycle (BOTs Manager and BOTs Executors) for lifecycle support and error handling
    • Intelligent Orchestration – Task, workflow, decisioning, AI/ML
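The “separation of logic and data” and “managed lifecycle” ideas above can be sketched in a few lines of Python. The names and structure here are illustrative only, not RLCatalyst's actual implementation: a BOT pairs reusable logic with externally supplied parameters, and a manager handles registration, execution, and error handling.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class Bot:
    """A BOT: automation logic kept separate from its input data."""
    name: str
    logic: Callable[[Dict[str, Any]], Dict[str, Any]]
    defaults: Dict[str, Any] = field(default_factory=dict)

    def run(self, params: Dict[str, Any] = None) -> Dict[str, Any]:
        data = {**self.defaults, **(params or {})}  # data merged at run time
        return self.logic(data)

class BotManager:
    """Managed lifecycle: register BOTs, execute with basic error handling."""
    def __init__(self):
        self.registry: Dict[str, Bot] = {}

    def register(self, bot: Bot) -> None:
        self.registry[bot.name] = bot

    def execute(self, name: str, params: Dict[str, Any] = None) -> Dict[str, Any]:
        try:
            return {"status": "ok", "result": self.registry[name].run(params)}
        except KeyError:
            return {"status": "error", "reason": f"unknown BOT {name}"}
        except Exception as exc:
            return {"status": "error", "reason": str(exc)}

# Example "Check" BOT: flags disk usage above a configurable threshold.
def disk_check(data: Dict[str, Any]) -> Dict[str, Any]:
    return {"alert": data["used_pct"] > data["threshold_pct"]}

manager = BotManager()
manager.register(Bot("disk-check", disk_check, {"threshold_pct": 80}))
print(manager.execute("disk-check", {"used_pct": 91}))
# → {'status': 'ok', 'result': {'alert': True}}
```

The same registry could hold "Act" BOTs that call scripts, APIs, or configuration tools; the orchestration layer only sees the uniform status/result envelope.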


To deploy BOTs across the enterprise and benefit from more sophisticated automation leveraging AI (Artificial Intelligence), RLCatalyst provides a prescriptive path to maturity as explained in the figure below.


ASB Approach
The ASB approach is an Open-Architecture approach to interconnect various systems, applications, and processes, similar to the “Enterprise Service Bus” model. This innovative approach of “software-defined” models, extendable metadata for configurations, and a hybrid architecture takes into consideration modern distributed security needs. The ASB model helps drive “Touchless Automation” with pre-built components and rapid adoption by existing enterprises.

The flexible deployment model integrates with current SaaS (Software as a Service) based ITSM platforms and allows Automation to be managed securely inside Cloud or On-Premise data centers. The architecture supports a hybrid approach with multi-tenant components along with secure per-instance BOT servers managing local security credentials. This comprehensive approach helps scale Automation from silos to enterprise-wide benefits: human effort savings, faster velocity, better compliance, and learning models for BOT efficiency improvements.


RLCatalyst provides solutions for enterprises to create their version of an Open Architecture based AIOps Platform that can integrate with their existing landscape and provide a roadmap for maturity.


  • RLCatalyst Command Centre “Integrates” with different monitoring solutions to create an Observe capability
  • RLCatalyst ServiceOne “Integrates” with ITSM solutions (ServiceNow and Freshdesk) for the Engage functionality
  • RLCatalyst BOTs Engine “Provides” a mature solution to “Design, Run, Orchestrate & Insights” for Act functionality

Relevance Lab is working closely with leading enterprises from different verticals of Digital Learning, Health Sciences & Financial Asset Management in creating a common “Open Platform” that helps bring Automation-First approach and a maturity model to incrementally make Automation more “Intelligent”.

For more information, feel free to contact marketing@relevancelab.com

References
Get Started with Building Your Automation Factory for Cloud
Intelligent Automation For User And Workspace Onboarding
Intelligent Automation with AS/400 based Legacy Systems support using UiPath
RLCatalyst BOTs Service Management connector for ServiceNow




Major advances are happening with the leverage of Cloud Technologies and large Open Data sets in the areas of Healthcare informatics, which includes sub-disciplines like Bioinformatics and Clinical Informatics. This is being rapidly adopted by Life Sciences and Healthcare institutions in the commercial and public sector space. This domain has deep investments in scientific research and data analytics, focusing on information, computation needs, and data acquisition techniques to optimize the acquisition, storage, retrieval, obfuscation, and secure use of information in health and biomedicine for evidence-based medicine and disease management.

In recent years, genomics and genetic data have emerged as an innovative area of research that could potentially transform healthcare. The emerging trends are personalized medicine, or precision medicine, leveraging genomics. Early diagnosis of a disease can significantly increase the chances of successful treatment, and genomics can detect a disease long before symptoms present themselves. Many diseases, including cancers, are caused by alterations in our genes. Genomics can identify these alterations and search for them using an ever-growing number of genetic tests.

With AWS, genomics customers can dedicate more time and resources to science, speeding time to insights, achieving breakthrough research faster, and bringing life-saving products to market. AWS enables customers to innovate by making genomics data more accessible and useful. AWS delivers the breadth and depth of services to reduce the time between sequencing and interpretation, with secure and frictionless collaboration capabilities across multi-modal datasets. Also, you can choose the right tool for the job to get the best cost and performance at a global scale, accelerating the modern study of genomics.

Relevance Lab Research@Scale Architecture Blueprint
Working closely with AWS Healthcare and Clinical Informatics teams, Relevance Lab is bringing a scalable, secure, and compliant solution for enterprises to pursue Research@Scale on Cloud for intramural and extramural needs. The diagram below shows the architecture blueprint for Research@Scale. The solution offered on the AWS platform covers technology, solutions, and integrated services to help large enterprises manage research across global locations.


Leveraging AWS Biotech Blueprint with our Research Gateway
This use case leverages the AWS Biotech Blueprint, which provides a Core template for deploying a preclinical, cloud-based research infrastructure and optional informatics software on AWS.

This Quick Start sets up the following:

  • A highly available architecture that spans two availability zones
  • A preclinical virtual private cloud (VPC) configured with public and private subnets according to AWS best practices to provide you with your own virtual network on AWS. This is where informatics and research applications will run
  • A management VPC configured with public and private subnets to support the future addition of IT-centric workloads such as active directory, security appliances, and virtual desktop interfaces
  • Redundant, managed NAT gateways to allow outbound internet access for resources in the private subnets
  • Certificate-based virtual private network (VPN) services through the use of AWS Client VPN endpoints
  • Private, split-horizon Domain Name System (DNS) with Amazon Route 53
  • Best-practice AWS Identity and Access Management (IAM) groups and policies based on the separation of duties, designed to follow the U.S. National Institute of Standards and Technology (NIST) guidelines
  • A set of automated checks and alerts to notify you when AWS Config detects insecure configurations
  • Account-level logging, audit, and storage mechanisms designed to follow NIST guidelines
  • A secure way to remotely join the preclinical VPC network by using the AWS Client VPN endpoint
  • A prepopulated set of AWS Systems Manager Parameter Store key/value pairs for common resource IDs
  • (Optional) An AWS Service Catalog portfolio of common informatics software that can be easily deployed into your preclinical VPC

Using the Quickstart templates, the products were added to AWS Service Catalog and imported into RLCatalyst Research Gateway.
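Programmatically, launching such catalog products for a researcher maps onto the Service Catalog API. Below is a hedged sketch using boto3; the product ID, provisioning artifact ID, and parameter names are placeholders that would come from your own catalog, not values defined by the Quick Start.

```python
def to_sc_params(params: dict) -> list:
    """Convert {key: value} into the Key/Value list Service Catalog expects."""
    return [{"Key": k, "Value": str(v)} for k, v in sorted(params.items())]

def provision_research_product(product_id: str, artifact_id: str,
                               name: str, params: dict):
    """Provision a catalog product (e.g. an RStudio workspace) for a project.
    IDs here are placeholders; real values come from the admin's catalog."""
    import boto3  # deferred so to_sc_params stays testable offline
    sc = boto3.client("servicecatalog")
    return sc.provision_product(
        ProductId=product_id,
        ProvisioningArtifactId=artifact_id,
        ProvisionedProductName=name,
        ProvisioningParameters=to_sc_params(params),
    )

print(to_sc_params({"InstanceType": "t3.medium"}))
# → [{'Key': 'InstanceType', 'Value': 't3.medium'}]
```

A portal layer like Research Gateway would wrap a call of this shape behind a role-checked UI, so researchers never handle product IDs directly.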



Using the standard products, the Nextflow workflow orchestration engine was launched for genomics pipeline analysis. Nextflow helps create and orchestrate analysis workflows, using AWS Batch to run the workflow processes.

Nextflow is an open-source workflow framework and domain-specific language (DSL) for Linux, developed by the Comparative Bioinformatics group at the Barcelona Centre for Genomic Regulation (CRG). The tool enables you to create complex, data-intensive workflow pipeline scripts, and simplifies the implementation and deployment of genomics analysis workflows in the cloud.

This Quick Start sets up the following environment in a preclinical VPC:

  • In the public subnet, an optional Jupyter notebook in Amazon SageMaker is integrated with an AWS Batch environment.
  • In the private application subnets, an AWS Batch compute environment for managing Nextflow job definitions and queues and for running Nextflow jobs. AWS Batch containers have Nextflow installed and configured in an Auto Scaling group.
  • Because there are no databases required for Nextflow, this Quick Start does not deploy anything into the private database (DB) subnets created by the Biotech Blueprint core Quick Start.
  • An Amazon Simple Storage Service (Amazon S3) bucket to store your Nextflow workflow scripts, input and output files, and working directory.
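The head-node pattern above can be sketched with the AWS Batch API: a single Batch job runs Nextflow itself, which then submits the per-process tasks back to Batch. The job queue and job definition names below are placeholders for whatever the Quick Start created in your account.

```python
def nextflow_command(pipeline: str, *extra_args: str) -> list:
    """Container command for a Nextflow head job, e.g. an nf-core pipeline."""
    return ["nextflow", "run", pipeline, "-with-report", *extra_args]

def submit_nextflow_job(pipeline: str, queue: str = "nextflow-queue",
                        job_def: str = "nextflow-head"):
    """Submit the Nextflow head node as an AWS Batch job; Nextflow then
    schedules the workflow's individual processes on the same compute env."""
    import boto3  # deferred so nextflow_command stays testable offline
    batch = boto3.client("batch")
    return batch.submit_job(
        jobName="nextflow-head",
        jobQueue=queue,
        jobDefinition=job_def,
        containerOverrides={"command": nextflow_command(pipeline)},
    )

print(nextflow_command("nf-core/rnaseq", "-profile", "awsbatch"))
# → ['nextflow', 'run', 'nf-core/rnaseq', '-with-report', '-profile', 'awsbatch']
```

The S3 bucket from the Quick Start would typically be passed via Nextflow's work directory and params, so inputs, outputs, and intermediate files all stay in object storage.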

RStudio for Scientific Research
RStudio is a popular IDE, licensed either commercially or under AGPLv3, for working with R. RStudio is available in a desktop version or a server version that allows you to access R via a web browser.

After you’ve analyzed the results, you may want to visualize them. Shiny is a great R package, licensed either commercially or under AGPLv3, that you can use to create interactive dashboards. Shiny provides a web application framework for R. It turns your analyses into interactive web applications; no HTML, CSS, or JavaScript knowledge is required. Shiny Server can deliver your R visualization to your customers via a web browser and execute R functions, including database queries, in the background.

RStudio is provided as a standard catalog item in Research Gateway for 1-click deployment and use. AWS provides a number of tools, like Amazon Athena, AWS Glue, and others, to connect to datasets for research analysis.

Benefits of using AWS for Clinical Informatics

  • Data transfer and storage: The volume of genomics data poses challenges for transferring it from sequencers in a quick and controlled fashion, then finding storage resources that can accommodate the scale and performance at a price that is not cost-prohibitive. AWS enables researchers to manage large-scale data that has outpaced the capacity of on-premises infrastructure. By transferring data to the AWS Cloud, organizations can take advantage of high-throughput data ingestion, cost-effective storage options, secure access, and efficient searching to propel genomics research forward.

  • Workflow automation for secondary analysis: Genomics organizations can struggle with tracking the origins of data when performing secondary analyses and running reproducible and scalable workflows while minimizing IT overhead. AWS offers services for scalable, cost-effective data analysis and simplified orchestration for running and automating parallelizable workflows. Options for automating workflows enable reproducible research or clinical applications, while AWS native, partner (NVIDIA and DRAGEN), and open-source solutions (Cromwell and Nextflow) provide flexible options for workflow orchestrators to help scale data analysis.

  • Data aggregation and governance: Successful genomics research and interpretation often depend on multiple, diverse, multi-modal datasets from large populations. AWS enables organizations to harmonize multi-omic datasets and govern robust data access controls and permissions across a global infrastructure to maintain data integrity as research involves more collaborators and stakeholders. AWS simplifies the ability to store, query, and analyze genomics data, and link it with clinical information.

  • Interpretation and deep learning for tertiary analysis: Analysis requires integrated multi-modal datasets and knowledge bases, intensive computational power, big data analytics, and machine learning at scale, which historically could take weeks or months, delaying time to insights. AWS accelerates the analysis of big genomics data by leveraging machine learning and high-performance computing. With AWS, researchers have access to greater computing efficiencies at scale, reproducible data processing, data integration capabilities to pull in multi-modal datasets, and public data for clinical annotation, all within a compliance-ready environment.

  • Clinical applications: Several hindrances impede the scale and adoption of genomics for clinical applications, including speed of analysis, managing protected health information (PHI), and providing reproducible and interpretable results. By leveraging the capabilities of the AWS Cloud, organizations can establish a differentiated capability in genomics to advance their applications in precision medicine and patient practice. AWS services enable the use of genomics in the clinic by providing the data capture, compute, and storage capabilities needed to empower the modernized clinical lab to decrease the time to results, all while adhering to the most stringent patient privacy regulations.

  • Open datasets: As more life science researchers move to the cloud and develop cloud-native workflows, they bring reference datasets with them, often in their own personal buckets, leading to duplication, silos, and poor version documentation of commonly used datasets. The AWS Open Data Program (ODP) helps democratize data access by making it readily available in Amazon S3, providing the research community with a single documented source of truth. This increases study reproducibility, stimulates community collaboration, and reduces data duplication. The ODP also covers the cost of Amazon S3 storage, egress, and cross-region transfer for accepted datasets.

  • Cost optimization: Researchers utilize massive genomics datasets, which require large-scale storage options and powerful computational processing and can be cost-prohibitive. AWS presents cost-saving opportunities for genomics researchers across the data lifecycle, from storage to interpretation. AWS infrastructure and data services enable organizations to save time and money, and devote more resources to science.
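As a concrete illustration of the Open Data Program, public datasets such as the 1000 Genomes bucket can be read from Amazon S3 without AWS credentials by using an unsigned client. The bucket and object names below are examples of public data, not part of any Relevance Lab solution.

```python
def s3_uri(bucket: str, key: str) -> str:
    """Canonical S3 URI for a dataset object."""
    return f"s3://{bucket}/{key}"

def list_open_dataset(bucket: str, prefix: str = "", max_keys: int = 5) -> list:
    """List objects in a public Open Data bucket without credentials."""
    import boto3  # deferred imports keep s3_uri testable offline
    from botocore import UNSIGNED
    from botocore.config import Config
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=max_keys)
    return [s3_uri(bucket, obj["Key"]) for obj in resp.get("Contents", [])]

# e.g. list_open_dataset("1000genomes", prefix="phase3/")
print(s3_uri("1000genomes", "README.analysis_history"))
# → s3://1000genomes/README.analysis_history
```

Because the data stays in the ODP bucket, analysis jobs can reference these URIs directly instead of copying reference datasets into per-project buckets.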

Summary
Relevance Lab is a specialist AWS partner working closely in Health Informatics and Genomics solutions leveraging AWS existing solutions and complementing them with its Self-Service Cloud Portal solutions, automation, and governance best practices.

To know more about how we can help standardize, scale, and speed up Scientific Research in Cloud, feel free to contact us at marketing@relevancelab.com.

References
AWS Whitepaper on Genomics Data Transfer, Analytics and Machine Learning
Genomics Workflows on AWS
HPC on AWS Video – Running Genomics Workflows with Nextflow
Workflow Orchestration with Nextflow on AWS Cloud
Biotech Blueprint on AWS Cloud
Running R on AWS
Advanced Bioinformatics Workshop




AWS Cloud provides the right platform to scale Health Informatics and Genomic Research with security, data privacy, and cost-effectiveness. Relevance Lab offers a scalable architecture blueprint with Research Gateway and pre-built support for common researcher tools like BioInformatics starter kit, Nextflow, R-Studio, and Open Datasets to speed up Scientific Research on Cloud.

Click here for the full story.



With growing interest and investments in new concepts like Automation and Artificial Intelligence, the common dilemma for enterprises is how to scale these for significant impact in their relevant context. It is easy to do a small proof of concept, but much harder to make a broader impact across the landscape of Hybrid Infrastructure, Applications, and Service Delivery models. Even more complex is Organizational Change Management for the underlying processes, culture, and “Way of Working”. There is no “Silver bullet” or “cookie-cutter” approach that can deliver radical change; it requires an investment in a roadmap of changes across People, Process, and Technology.


Relevance Lab has been working closely with leading enterprises from different verticals of Digital Learning, Health Sciences & Financial Asset Management on creating a common “Open Platform” that helps bring Automation-First approach and a maturity model to incrementally make Automation more “Intelligent”.



Relevance Lab offers RLCatalyst, an AIOps platform driven by Intelligent Automation that paves the way for a faster and seamless Digital Transformation journey. The RLCatalyst product is focused on driving “Intelligent” AUTOMATION.


AUTOMATION is the core functionality including:
  • DevOps Automation targeting Developer & Operations use cases
  • TechOps Automation targeting IT Support & Operations use cases
  • ServiceOps Automation targeting ServiceDesk & Operations use cases
  • SecOps Automation targeting Security, Compliance & Operations use cases
  • BusinessOps Automation targeting RPA, Applications/Data & Operations use cases

Driving Automation to be more effective and efficient with “Intelligence” is the key goal, driven by a maturity model.

“Intelligence”-based Maturity Model for Automation
  • Level-1: Automation of tasks, normally assisting users
  • Level-2: Integrated Automation focused on Process & Workflows, replacing humans
  • Level-3: Automation leveraging existing Data & Context to drive decisions in more complex processes, leveraging Analytics
  • Level-4: Autonomous & Cognitive techniques using Artificial Intelligence for Automation



RLCatalyst Building Blocks for AIOps

AIOps platforms need common building blocks for “OBSERVE – ENGAGE – ACT” functionality. As enterprises expand their Automation coverage across DevOps, TechOps, ServiceOps, SecOps, and BusinessOps, there is a need for all three stages: Observe (with Sensors), Engage (Workflows), and Act (Automation & Remediation).


RLCatalyst provides solutions for enterprises to create their version of an Open Architecture based AIOps Platform that can integrate with their existing landscape and provide a roadmap for maturity.


  • RLCatalyst Command Centre “Integrates” with different monitoring solutions to create an Observe capability
  • RLCatalyst ServiceOne “Integrates” with ITSM solutions (ServiceNow and Freshdesk) for the Engage functionality
  • RLCatalyst BOTs Engine “Provides” a mature solution to “Design, Run, Orchestrate & Insights” for Act functionality


For more information, feel free to contact marketing@relevancelab.com




The adoption of Cloud and DevOps has brought changes in large enterprises around the traditional management methodology of the Infra, Middleware, and Applications lifecycle. There is a continuous “tension” to achieve the right balance of “security + compliance” vs “agility + flexibility” between Operations and Development teams. For large enterprises with multiple business units, global operations, and distributed assets across multiple cloud providers, these issues are more complex. While there is no “silver bullet” that can solve all these issues, every enterprise needs a broad framework for achieving the right balance.

The broad framework is based on the following criteria:

  • IT teams predominantly define the infrastructure components like images, network designs, security policies, compliance guardrails, standard catalogs etc. based on the organization’s policies and requirements.
  • Application teams have the flexibility to order and consume these components and to manage the post-provisioning lifecycle specific to their needs.

The challenge faced by larger enterprises using multiple cloud workloads is the lack of a common orchestration portal that enables application teams to make self-service requests and use flexible workflows for managing workload configuration and the application deployment lifecycle. The standard cloud management portals from the major cloud providers have automated most of their internal provisioning processes, yet don't provide customer-specific solutions or workload placement across various public and private clouds. To serve the needs of Application groups, a portal is needed with the following key functionalities.


  • A self-service portal controlled via role-based access.
  • A standard catalog of items for infrastructure management.
  • Flexible workflows for creating a full configuration-management lifecycle.
  • Microservices-based building blocks for consuming “Infrastructure as Code” and managing the post-provisioning lifecycle.
  • Ability to monitor the end-to-end provisioning lifecycle with proper error handling and interventions when needed.
  • Governance and management post-provisioning across multiple workloads and cloud services.

Relevance Lab has come up with a microservices-based automation solution which automates enterprise multi-cloud provisioning, pre- and post-provisioning workflows, workload management, mandatory policies, configurations, and security controls. The end-to-end provisioning is automated and made seamless to the user by integrating with ServiceNow, domain servers, configuration servers, and various cloud services. Multiple microservices handle each stage of the automation, making it highly flexible to extend to any cloud resource. The building blocks of the framework are shown below:



The IaC (Infrastructure as Code) templates, maintained in a source code repository, can cover a variety of resources.


Resource                      | Platform                 | Automated Process
Compute – VM/Server           | VMware, AWS, Azure, GCP  | Automated provisioning of VMs and backup VMs
Compute – DB Server           | VMware, AWS, Azure, GCP  | Automated provisioning of DB servers and backup servers – Oracle, PostgreSQL, MSSQL, MySQL, SAP
Compute – HA and DR           | VMware, AWS, Azure, GCP  | Automated provisioning of HA and DR servers
Compute – Application Stack   | AWS, Azure               | Automated provisioning of application stacks using CFTs and ARM templates
Network – VPC                 | AWS, Azure, GCP          | Automated provisioning of VPCs and subnets
Storage                       | AWS, Azure, GCP          | Automated provisioning of S3 buckets or Blob storage
Storage – Gateways            | AWS                      | Automated provisioning of storage gateways
DNS Server                    | AWS, Azure               | Automated provisioning of DNS servers
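On AWS, a template-driven row from the table above maps onto CloudFormation's CreateStack call. The sketch below is illustrative only: the stack name, template URL, parameter names, and tag are placeholders, not part of the actual solution described here.

```python
def to_cfn_params(params: dict) -> list:
    """Convert {key: value} into CloudFormation's parameter list format."""
    return [{"ParameterKey": k, "ParameterValue": str(v)}
            for k, v in sorted(params.items())]

def provision_from_template(stack_name: str, template_url: str,
                            params: dict, region: str = "us-east-1"):
    """Launch an IaC template (e.g. the VPC row above) as a CloudFormation stack."""
    import boto3  # deferred so to_cfn_params stays testable offline
    cfn = boto3.client("cloudformation", region_name=region)
    return cfn.create_stack(
        StackName=stack_name,
        TemplateURL=template_url,
        Parameters=to_cfn_params(params),
        Tags=[{"Key": "provisioned-by", "Value": "self-service-portal"}],
    )

print(to_cfn_params({"CidrBlock": "10.0.0.0/16"}))
# → [{'ParameterKey': 'CidrBlock', 'ParameterValue': '10.0.0.0/16'}]
```

An orchestration microservice would typically poll the stack status (or subscribe to stack events) to drive the error handling and post-provisioning steps mentioned earlier.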


Getting Started with Hybrid Cloud Automation – Our Recommendations:

  • Generate a standard cloud catalog and create reusable automated workflows for processes such as approval and access control.
  • To optimize the management of resources, limit the number of blueprints. Specific features can be provisioned in a modular fashion on the base image.
  • Use configuration management tools like Chef/Puppet/Ansible to install the various management agents.
  • Use the “Infrastructure as Code” principle to provision infrastructure in an agile fashion, using tools like GitHub, Jenkins, and a configuration management tool.

Benefits:

  • Significantly reduce Operations costs by reducing manual effort and proactively monitoring services using a single platform.
  • Reduce time to market for new cloud services by enabling single-click deployment of cloud services.

For more details, please feel free to reach out to marketing@relevancelab.com


