Relevance Lab has launched its professional services for Service Workbench on AWS (SWB), available to customers through AWS Marketplace. SWB is a cloud-based open-source solution that caters to the needs of the scientific research community by empowering both researchers and research IT teams.
Relevance Lab is a preferred partner for SWB, helping customers adopt this open-source solution seamlessly. We have deep expertise and can help with assessment, planning, deployment, training, customization, and ongoing managed services support in a cost-effective manner.
Highlights of Professional Services Offering
Service Workbench on AWS, an open-source solution, is fully supported with deep competence across the Plan-Build-Run lifecycle
Provide assessment, planning, deployment, training, customization and ongoing managed services support
Offer cost-effective and flexible engagement models
With Relevance Lab’s professional services for SWB, IT teams are able to deliver secure, repeatable, and federated access control to data, tooling, and compute power to researchers, driving frictionless scientific research on the cloud.
Assessment, Implementation and Training for new and existing setup
Advanced Setup & Premium Support including underlying infrastructure with special needs on Security, Compliance, Data Protection and Scalability
Ongoing Managed Services & Support including Upgrades, Monitoring and Incident Management
SWB code and new-feature customization, plus enhancement services for custom catalog items such as RStudio on ALB
What Does It Mean for the Scientific Research Community?
Relevance Lab’s Professional Services Offering for Service Workbench on AWS is a solution that enables IT teams to provide secure, repeatable, and federated control of access to data, tooling, and compute power that researchers need. With Service Workbench, researchers no longer have to worry about navigating cloud infrastructure. They can focus on achieving research missions and completing essential work in minutes, not months, in configured research environments.
Question-2 What is a typical customer end-to-end journey?
Answer: Most customers look for the following support for the adoption lifecycle.
One time on-boarding
Product customization services
On-Going managed services and support
T&M services for anything additional
Question-3 How long does onboarding take, and what does it cost?
Answer: A standard onboarding for a new customer takes about 2 weeks covering initial assessment, installation, configurations, training, and basic functionality demonstration for a new setup. It costs about US $10,000.
Question-4 What sort of support is available post onboarding?
Answer: Following are the common support activities requested:
L0 – Monitoring and Diagnostics
L1 – Technical Queries on how to use the product effectively
L3 – Customization, enhancements (typically for less than 40-hour changes per request)
Project Engagement – for typically 40+ hours of enhancements/customization work
Question-5 What is the engagement model for ongoing support or customizations?
Answer: Two models of support are offered – Basic and Premium. In case of customizations, both models of project-based and Time & Material engagement are possible.
SWB is available as an open-source solution and provides useful functionality to enable a self-service portal for research customers. However, without a dedicated partner to support the complete lifecycle, adoption can be a daunting exercise for customers and an overhead for their internal IT teams. Based on feedback from early adopters and in partnership with AWS, we are happy to launch specialized professional services on AWS Marketplace to make adoption frictionless for customers. Keeping the open-source nature in mind, the services are optimized to be cost-effective and flexible, with the goal of making scientific research in the cloud faster, cheaper, and better.
The rapid advancement of cloud computing has brought new possibilities for public institutions and private enterprises. With near-infinite resources and scalability, ease of setup, provisioning with IaaS, PaaS, and SaaS models, and pay-as-you-go pricing, the cloud has opened up opportunities and frontiers that were simply not possible in years past.
Scientific research is one such discipline. Sectors that depend on research for their relevance and impact have been able to make rapid advances using cloud computing. Scientific research requires vast compute and storage resources for ingesting and analyzing very large data sets, along with specialized tools and services for data analysis and visualization. Healthcare and life sciences is one such sector, where research happens on a mega-scale across public and private institutions funded by various governments and private-sector entities.
Typical Characteristics of Research Ecosystem
There are some defining characteristics of a research ecosystem that lend themselves naturally to cloud-based computing environments.
Research happens with a community of researchers working towards common objectives and goals
Budgets get defined based on proposals and typically need to be allocated to research teams and tracked closely for consumption
Ability to set up and operate secure and trusted environments for sensitive data and compute engines with specialized pipelines, analysis, and visualization tools
Ability to share research outcomes with sponsors and fellow researchers in a simple manner
The need to put massive amounts of storage, compute, and specialized analysis and visualization capabilities at researchers’ fingertips, and to make them simple to access for scientific researchers who are not necessarily IT-savvy enough to create and operate complex IT infrastructure
Cloud Portals for Scientific Research
The above characteristics and requirements have opened up a new class of solutions called Cloud Self-Service Portals, which typically provide researchers with a curated set of tools and datasets, the ability to track budget consumption, simple one-click provisioning and lifecycle management of the cloud resources needed for scientific research, and the ability to share research outcomes with fellow researchers.
RLCatalyst Research Gateway and Use Cases
Relevance Lab, with its Cloud Self Service Portal, RLCatalyst Research Gateway, has been leading the way in providing highly simplified access to a curated set of cloud services for the Scientific Research community.
Nextflow and Nextflow Tower
Genomic pipeline processing using the open-source Nextflow and Nextflow Tower solutions with High Performance Computing on the AWS Cloud (AWS Batch, ParallelCluster, Illumina DRAGEN/NVIDIA Parabricks pipeline processing), with easy deployment and cost tracking per researcher per pipeline
Secure and Scalable RStudio on AWS
RStudio solution on the AWS Cloud with the ability to connect securely (using SSL) without having to worry about managing custom certificates and their lifecycle
EC2 based researcher tools
Enable researchers with EC2 Linux and Windows servers to install their specific research tools and software, with the ability to add AMI-based researcher tools (both private and from AWS Marketplace) with 1 click on MyResearchCloud
SageMaker AI/ML Workbench to drive data research (such as COVID-19 impact analysis) with public data sets already available on the AWS cloud, and to create study-specific data sets
Bring-your-own-license support with proper cost and budget control in self-service models
Enable a small group of Principal Investigators and researchers to manage research grant programs with tight budget control, self-service provisioning, and research data sharing
Cloud Self-Service Portals are a class of solutions that hide the complexity of setting up and operating cloud environments for resource-intensive activities such as scientific research, giving researchers more time to focus on the science. Relevance Lab, with its RLCatalyst Research Gateway solution, has been a key enabler, having solved several use cases (mentioned above in this blog) related to scientific research for the genomics, life sciences, and pharma sectors.
To know more about how RLCatalyst Research Gateway can meet your scientific research requirements, please contact us at firstname.lastname@example.org.
Developed in the Data Sciences Platform at the Broad Institute, the Genome Analysis Toolkit (GATK) offers a wide variety of tools with a primary focus on variant discovery and genotyping. Relevance Lab is pleased to offer researchers the ability to run their GATK pipelines on AWS, a capability that was missing so far, through our Genomics Cloud solution and a 1-click model.
GATK is making scientific research simpler for Genomics by providing best practices workflows and docker containers. The workflows are written in Workflow Description Language (WDL), a user-friendly scripting language maintained by the OpenWDL community. Cromwell is an open-source workflow execution engine that supports WDL as well as CWL, the Common Workflow Language, and can be run on a variety of different platforms, both local and cloud-based. RLCatalyst Research Gateway added support for the Cromwell engine that enables researchers to run any popular workflows on AWS seamlessly. Some of the popular workflows that are available for a quick start are the following:
The figure below shows the building block of this solution on AWS Cloud.
Steps for running GATK with WDL and Cromwell on AWS Cloud
Log into RLCatalyst Research Gateway with a Principal Investigator or Researcher profile. Select the project for running Genomics pipelines and, the first time, create a new Cromwell Advanced product.
Select the input data location, output data location, and pipeline to run (from GATK), and provide parameters (input.json). Default parameters are already suggested for the use of AWS Batch with Spot instances, with all other AWS complexities abstracted from the end user for simplicity.
It takes about 5 minutes to provision a new Cromwell server on AWS, with the AWS Batch setup completed, in 1 click.
Execute the pipeline (using the UI or by SSH into the head node) on the Cromwell server. New pipelines can be run, their status monitored, and their outputs reviewed from within the Portal UI.
Pipelines can take some time to run, depending on the size of the data and the complexity of the workflow.
View outputs of the Pipeline in Outputs S3 bucket from within the Portal. Use specialized tools like MultiQC, Integrative Genomics Viewer (IGV), and RStudio for further analysis.
All costs related to users, products, and pipelines are automatically tagged and can be viewed on the budgets screen to see the cloud spend for pipeline execution across all resources, including dynamically provisioned AWS Batch HPC instances. Once the pipelines have executed, the existing Cromwell server can be stopped or terminated to reduce ongoing costs.
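Behind the portal UI, submitting a pipeline to a Cromwell server boils down to a call to Cromwell’s workflow submission REST endpoint (POST /api/workflows/v1). The sketch below builds the multipart form fields for such a submission; the WDL text, S3 paths, and queue ARN are hypothetical placeholders, not the product’s actual values.

```python
import json

def build_cromwell_submission(wdl_source, inputs, options=None):
    """Build the multipart form fields for Cromwell's POST /api/workflows/v1."""
    files = {
        "workflowSource": ("workflow.wdl", wdl_source),
        "workflowInputs": ("inputs.json", json.dumps(inputs)),
    }
    if options:
        # e.g. AWS Batch queue used by Cromwell's AWSBatch backend
        files["workflowOptions"] = ("options.json", json.dumps(options))
    return files

# Hypothetical GATK run: WDL text would be loaded from a file in practice.
fields = build_cromwell_submission(
    wdl_source="workflow HelloGATK { ... }",
    inputs={"HelloGATK.input_bam": "s3://my-input-bucket/sample.bam"},
    options={"default_runtime_attributes": {"queueArn": "arn:aws:batch:us-east-1:123456789012:job-queue/genomics"}},
)
# Submit with, e.g.: requests.post("http://cromwell-host:8000/api/workflows/v1", files=fields)
```

The portal abstracts exactly this kind of call, so researchers never need to construct the payload themselves.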
The figure below shows the ability to select Cromwell Advanced to provision and run any pipeline.
The following picture shows the architecture of Cromwell on AWS.
The GATK community is constantly striving to make Genomics research in the cloud simpler. So far, support for the AWS Cloud was missing and was a key ask from multiple online research communities. Relevance Lab, in partnership with AWS, has addressed this need with its Genomics Cloud solution to make scientific research frictionless.
The worldwide pandemic has highlighted the need to advance human health faster and accelerate the discovery of new drugs for precision medicine leveraging Genomics. We are building a Genomics Cloud on AWS that leverages the convergence of big compute, large data sets, AI/ML analytics engines, and high-performance workflows to make drug discovery more efficient, combining cloud and open source with our products.
Relevance Lab (RL) has been collaborating with AWS partnership teams over the last year to create the Genomics Cloud. This is one of the dominant use cases for scientific research in the cloud, driven by healthcare and life sciences groups exploring ways to make Genomics analysis better, faster, and cheaper so that researchers can focus on science, not complex infrastructure.
RL offers a product RLCatalyst Research Gateway that facilitates Scientific Research with easier access to big compute infrastructure, large data sets, powerful analytics tools, a secure research environment, and the ability to drive self-service research with tight cost and budget controls.
The top use cases for AWS Genomics in the Cloud are implemented by this product and provide an out-of-the-box solution, significantly saving cost and effort for customers.
Key Building Blocks for Genomics Cloud Architecture
The Genomics Cloud solution provides the following key components to meet the needs of researchers, scientists, developers, and analysts, letting them efficiently run their experiments without deep expertise in the backend computing capabilities.
Genomics Pipeline Processing Engine
The research community processes large data sets on HPC systems, with the orchestration layer managed by popular open-source tools like Nextflow and Cromwell.
Nextflow is a bioinformatics workflow manager that enables the development of portable and reproducible workflows. It supports deploying workflows on a variety of execution platforms, including local, HPC schedulers, AWS Batch, Google Cloud Life Sciences, and Kubernetes.
Cromwell is a workflow execution engine that simplifies the orchestration of computing tasks needed for Genomics analysis. Cromwell enables Genomics researchers, scientists, developers, and analysts to efficiently run their experiments without the need for deep expertise in the backend computing capabilities.
Many organizations also use commercial tools like Illumina DRAGEN and NVIDIA Parabricks for similar solutions; these are more optimized at reducing processing timelines but come at a price.
Open Source Repositories for Common Genomics Workflows
The solution needs to allow researchers to leverage work done by different communities and tools to reuse existing available workflows and containers easily. Researchers can leverage any of the existing pipelines & containers or can also create their own implementations by leveraging existing standards.
GATK4 is a Genome Analysis Toolkit for Variant Discovery in High-Throughput Sequencing Data. Developed in the Data Sciences Platform at the Broad Institute, the toolkit offers a wide variety of tools with a primary focus on variant discovery and genotyping. Its powerful processing engine and high-performance computing features make it capable of taking on projects of any size.
BioContainers – A community-driven project to create and manage bioinformatics software containers.
Large Data Sets Storage and Access to Open Data Sets
AWS cloud is leveraged to deal with the needs of large data sets for storage, processing, and analytics using the following key products.
Amazon S3 for high-throughput data ingestion, cost-effective storage options, secure access, and efficient searching
AWS DataSync, a secure online service that automates and accelerates moving data between on-premises storage and AWS storage services
AWS Open Datasets Program, which houses openly available data, including 40+ open Life Sciences data repositories
Outputs Analysis and Monitoring Tools
Genomic data analysis is one of the key building blocks and needs access to common tools like the following, integrated into the solution.
MultiQC searches a given directory for analysis logs and compiles an HTML report. It is a general-use tool, perfect for summarising the output from numerous bioinformatics tools.
IGV (Integrative Genomics Viewer) is a high-performance, easy-to-use, interactive tool for the visual exploration of genomic data.
RStudio for Genomics since R is one of the most widely-used and powerful programming languages in bioinformatics. R especially shines where a variety of statistical tools are required (e.g., RNA-Seq, population Genomics, etc.) and in the generation of publication-quality graphs and figures.
Genomics Data Lake: an AWS data lake for tertiary processing of Genomics data. Once secondary analysis generates outputs, typically in Variant Call Format (VCF), that data needs to move into a Genomics Data Lake for further analysis. Leveraging standard AWS tools and solution frameworks, a Genomics Data Lake is implemented and integrated with the end-to-end sequencing processing pipeline.
Variant Call Format
The Variant Call Format (VCF) specification is used in bioinformatics for storing gene sequence variations, typically in a compressed text file. According to the VCF specification, a VCF file has meta-information lines, a header line, and data lines. Compressed VCF files are indexed for fast data retrieval (random access) of variants from a range of positions.
VCF files, though popular in bioinformatics, are a mixed file type that includes a metadata header and a more structured table-like body. Converting VCF files into the Parquet format works excellently in distributed contexts like a Data Lake.
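As an illustration of the VCF structure described above, the sketch below separates the meta-information lines, header line, and data lines of a small VCF fragment using only the standard library; in a real pipeline, the resulting records would then be written to Parquet with a library such as pyarrow.

```python
def parse_vcf(text):
    """Split a VCF document into meta-information lines, column headers, and data records."""
    meta, header, records = [], [], []
    for line in text.splitlines():
        if line.startswith("##"):            # meta-information lines
            meta.append(line)
        elif line.startswith("#"):           # the single header line
            header = line.lstrip("#").split("\t")
        elif line.strip():                   # data lines: one variant per row
            records.append(dict(zip(header, line.split("\t"))))
    return meta, header, records

# A tiny illustrative fragment (tab-separated, per the VCF specification)
sample = """##fileformat=VCFv4.2
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO
chr1\t10177\trs367896724\tA\tAC\t100\tPASS\tAC=1
"""
meta, header, records = parse_vcf(sample)
# Each record is a dict keyed by the header columns; rows like these
# map naturally onto Parquet's columnar layout.
```

This table-like body is exactly why the conversion to a columnar format such as Parquet is straightforward once the metadata header has been split off.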
Cost Analysis of Workflows
One of the biggest concerns for users of the Genomics Cloud is control over budget and cost. RLCatalyst Research Gateway addresses this by tracking spend across projects, researchers, and workflow runs at a granular level, and allows spend to be optimized through techniques like Spot instances and on-demand compute. Guardrails are built in for appropriate controls and corrective actions, and users can run sequencing workflows in their own AWS accounts, allowing for transparent control and visibility.
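The granular tracking described above comes down to aggregating tagged cost line items. The sketch below groups costs by one tag key, the way a portal might after pulling data from AWS Cost Explorer; the tag names and dollar figures are illustrative, not real data.

```python
from collections import defaultdict

def spend_by_tag(cost_records, tag_key):
    """Sum cost line items grouped by the value of one tag (e.g. Researcher, Pipeline)."""
    totals = defaultdict(float)
    for rec in cost_records:
        totals[rec["tags"].get(tag_key, "untagged")] += rec["cost_usd"]
    return dict(totals)

# Illustrative records, shaped like flattened cost-and-usage output
records = [
    {"cost_usd": 12.40, "tags": {"Researcher": "alice", "Pipeline": "gatk-haplotype"}},
    {"cost_usd": 3.10,  "tags": {"Researcher": "bob",   "Pipeline": "rnaseq"}},
    {"cost_usd": 7.50,  "tags": {"Researcher": "alice", "Pipeline": "rnaseq"}},
]
per_researcher = spend_by_tag(records, "Researcher")   # per-researcher totals
per_pipeline = spend_by_tag(records, "Pipeline")       # per-pipeline totals
```

Because every provisioned resource is tagged automatically, the same grouping works per project, per researcher, or per workflow run.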
To make large-scale genomic processing in the cloud easier for institutions, principal investigators, and researchers, we provide the fundamental building blocks for Genomics Cloud. The integrated product covers large data sets access, support for popular pipeline engines, access to open source pipelines & containers, AWS HPC environments, analytics tools, and cost tracking that takes away the pains of managing infrastructure, data, security, and costs to enable researchers to focus on science.
Relevance Lab (RL) is a specialist company in helping customers adopt cloud “The Right Way” by focusing on an “Automation-First” and DevOps strategy. It covers the full lifecycle of migration, governance, security, monitoring, ITSM integration, app modernization, and DevOps maturity. Leveraging a combination of services and products for cloud adoption, we help customers on a “Plan-Build-Run” transformation that drives greater velocity of product innovation, global deployment scale, and cost optimization for new generation technology (SaaS) and enterprise companies.
In this blog, we will cover some common themes that we have been using to help our customers for cloud adoption as part of their maturity journey.
SaaS with multi-tenant architecture
Multi-Account Cloud Management for AWS
Microservices architecture with Docker and Kubernetes (AWS EKS)
Jenkins for CI/CD pipelines and focus on cloud agnostic tools
AWS Control Tower for Cloud Management & Governance solution (policy, security & governance)
DevOps maturity models
Cost optimization, agility, and automation needs
Standardization for M&A (Merger & Acquisitions) integrations and scale with multiple cloud provider management
Spectrum of AWS governance for optimum utilization, robust security, and reduction of budget
Automation/BOT landscape, how different strategies are appropriate at different levels, and the industry best practice adoption for the same
Reference enterprise strategy for structuring DevOps in engineering environments with cloud-native development and SaaS-based products
Relevance Lab Cloud and DevOps Credentials at a Glance
RL has been a cloud, DevOps, and automation specialist since inception in 2011 (10+ years)
Need for a Comprehensive Approach to Cloud Adoption
Most enterprises today have their applications in the cloud or are aggressively migrating new ones to achieve the digital transformation of their business. However, customers need to think about the “Day-After” cloud in order to avoid surprises on costs, security, and additional operational complexity. The right Cloud Management not only helps eliminate unwanted costs and compliance gaps, but also ensures optimal use of resources, “The Right Way” to use the cloud. Our “Automation-First” approach helps minimize manual intervention, thereby reducing error-prone manual work and costs.
RL’s mature DevOps framework helps ensure that application development is done with accuracy, agility, and scale. Finally, to ensure this whole framework of Cloud Management, Automation, and DevOps continues seamlessly, you need the right AIOps-driven service delivery model.
Hence, for any mature organization, the four themes below become the foundation: Cloud Management, Automation, DevOps, and AIOps.
RL offers a unique methodology covering Plan-Build-Run lifecycle for Cloud Management, as explained in the diagram below.
Following are the basic steps for Cloud Management:
Built on best practices offered from native cloud providers and popular solution frameworks, RL methodology leverages the following for Cloud Management:
AWS Well-Architected Framework
AWS Management & Governance Lens
AWS Control Tower for large scale multi-account management
AWS Service Catalog for template-driven organization standard product deployments
Terraform for Infra as Code automation
AWS CloudFormation Templates
AWS Security Hub
The basic Cloud Management best practices are augmented with unique products & frameworks built by RL based on our 50+ successful customer implementations covering the following:
Quickstart automation templates
AppInsights and ServiceOne – built on ITSM
RLCatalyst cloud portals – built on Service Catalog
Governance360 – built on Control Tower
RLCatalyst BOTS Automation Server
Instill ongoing maturity and optimization using the following themes:
Four level compliance maturity model
Key Organization metrics across assets, cost, health, governance, and compliance
Industry-proven methodologies like HIPAA, SOC2, GDPR, NIST, etc.
For Cloud Management and Governance, RL has Solutions like Governance360, AWS Management and Governance lens, Cloud Migration using CloudEndure. Similarly, we have methodologies like “The Right Way” to use the cloud, and finally Product & Platform offerings like RLCatalyst AppInsights.
RL promotes an “Automation-First” approach for cloud adoption, covering all stages of the Plan-Build-Run lifecycle. We offer a mature automation framework called RLCatalyst BOTs and self-service cloud portals that allow full lifecycle automation.
In terms of deciding how to get started with automation, we help with an initial assessment model on “What Can Be Automated” (WCBA) that analyses the existing setup of cloud assets, applications portfolio, IT service management tickets (previous 12 months), and Governance/Security/Compliance models.
For the Automation theme, RL has Solutions like Automation Factory, University in a Box, Scientific Research on Cloud, 100+ BOTs library, custom solutions on Service WorkBench for AWS. Similarly, we have methodologies like Automation-First Approach, and finally Product & Platform offerings like RL BOTs automation Engine, Research Gateway, ServiceNow BOTs Connector, UiPath BOTs connector for RPA.
The following blogs explain in more detail our offerings on automation.
DevOps and Microservices
DevOps and microservices with containers are a key part of all modern architecture for scalability, re-use, and cost-effectiveness. RL, as a DevOps specialist, has been working on re-architecting applications and cloud migration across different segments covering education, pharma & life sciences, insurance, and ISVs. The adoption of containers is a key building block for driving faster product deliveries leveraging Continuous Integration and Continuous Delivery (CI/CD) models. Some of the key considerations followed by our teams cover the following for CI/CD with Containers and Kubernetes:
Environment dependent attributes for better configuration management
Order of execution and well-defined structure
Repeatable and re-usable resources and components
Self-contained artifacts for easy portability
The following diagram shows a standard blueprint we follow for DevOps:
For the DevOps & Microservices theme, RL has Solutions like CI/CD Cockpit solution, Cloud orchestration Portal, ServiceNow/AWS/Azure DevOps, AWS/Azure EKS. Similarly, we have methodologies like WOW DevOps, DevOps-driven Engineering, DevOps-driven Operations, and finally Product & Platform offerings like RL BOTs Connector.
AIOps and Service Delivery
RL brings in unique strengths across AIOps with IT Service Delivery Management on platforms like ServiceNow, Jira ServiceDesk and FreshService. By leveraging a platform-based approach that combines intelligent monitoring, service delivery management, and automation, we offer a mature architecture for achieving AIOps in a prescriptive manner with a combination of technology, tools, and methodologies. Customers have been able to deploy our AIOps solutions in 3 months and benefit from achieving 70% automation of inbound requests, reduction of noise on proactive monitoring by 80%, 3x faster fulfillment of Tickets & SLAs with a shift to a proactive DevOps-led organization structure.
RL offers a combination of Solutions, Methodologies, and Product & Platform offerings covering the 360 spectrum of an enterprise Cloud & DevOps adoption across 4 different tracks covering Cloud Management, Automation, DevOps, and AIOps.
The benefits of a technology-driven approach that leverages an “Automation-First” model have helped our customers reduce their IT spend by 30% over a period of 3 years, with 3x faster product deliveries and real-time security & compliance.
To know more about our Cloud Centre of Excellence and how we can help you adopt Cloud “The Right Way” with best practices leveraging Cloud Management, Automation, DevOps, and AIOps, feel free to write to email@example.com
Software architecture provides a high-level overview of what a software system looks like. At the very minimum, it shows the various logical pieces of the overall solution and the interaction between those pieces. (See C4 Model for architecture diagramming). The software architecture is like a map of the terrain for anybody who must deal with the system. Contrary to what many might think, software architecture is important even for non-engineering functions like sales, as many customers like to review the architecture to see how well it fits within their enterprise and whether it could introduce future issues by its adoption.
Goals of the Architecture
It is important to determine the goals for the system when deciding on the architecture. This should include both short-term and long-term goals.
Some of our important goals for RLCatalyst Research Gateway are:
1. Ease of Use
The basic question in our mind is always “How would customers like to use this system?”. Our product is targeted to researchers and academics who want to use the scalability and elasticity of the AWS cloud for ad-hoc and high-performance computing needs. These users are not experts at using the AWS console. So, we made things extremely simple for the user. Researchers can order products with a single click, and the portal sets up their resources without the user needing to understand any of the underlying complexities. Users can also interact with the products through the portal, eliminating the need to set up anything outside the portal (though they always have that option).
We also kept in mind the administrators of the system, for whom this might be just one among many systems they must manage. Thus, we made it easy for the administrator to add AWS accounts, create Organizational Units, and integrate Identity Providers. Our goal was for administrators to get the system up and running in less than 30 minutes.
2. Scalability, performance, and reliability
We followed the best practices recommended by AWS, and where possible, used standardized architecture models so that users would find it easy as well as familiar. For example, we deploy our system into a VPC with public and private subnets. The subnets are spread across multiple Availability Zones to guard against the possibility of one availability zone going down. The computing instances are deployed in the private subnet to prevent unauthorized access. We also use auto-scaling groups for the system to be able to pull in additional compute instances when the load is higher.
3. Time to market
One of our main goals was to bring the product to market quickly and put it in front of customers to gain early and valuable feedback. Developing the product as a partner of AWS was a great help, since we were able to use many AWS services for common application needs without spending time developing our own components for well-known use cases. For example, RLCatalyst Research Gateway does its user management via AWS Cognito, which provides the facility to create users, roles, and groups as well as the ability to interface with other Identity Provider systems.
Similarly, we use Amazon DocumentDB (with MongoDB API compatibility) as our database. This allows developers to use a local MongoDB instance, while QA and production systems use DocumentDB with the high availability of multi-AZ clusters and automated backups via AWS Backup and snapshots.
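A minimal sketch of how such an environment switch might look, assuming a hypothetical APP_ENV variable; the DocumentDB cluster hosts and connection options below are placeholders, not the product's actual configuration.

```python
import os

# Hypothetical connection strings: local MongoDB for development,
# DocumentDB cluster endpoints (placeholder hosts) for QA and production.
URIS = {
    "dev": "mongodb://localhost:27017/research_gateway",
    "qa": "mongodb://qa-docdb.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017/?tls=true&replicaSet=rs0",
    "prod": "mongodb://prod-docdb.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017/?tls=true&replicaSet=rs0",
}

def mongo_uri(env=None):
    """Pick the MongoDB/DocumentDB URI for the current environment (defaults to dev)."""
    return URIS[env or os.environ.get("APP_ENV", "dev")]
```

Because DocumentDB speaks the MongoDB wire protocol, the application code stays identical across environments; only the connection string changes.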
4. Cost efficiency
This is one of the key concerns for every administrator. RLCatalyst Research Gateway uses a scalable architecture that not only lets the system scale up when the load is high but also scales down when the load is less to optimize on the cost. We use EKS clusters to deploy our solution and AWS DocumentDB clusters. This allows us to choose the size and instance type according to the cost considerations.
We have also brought in features like the automatic shutdown of resources so that idle compute instances, which are not running any jobs, can shut down after a 15-minute idle time. Additionally, even resources like ALBs are de-provisioned when the last compute instance behind them is de-provisioned.
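The idle-shutdown behavior can be sketched as a simple policy over recent CPU utilization samples, similar to what a CloudWatch-alarm-driven check would evaluate; the threshold values here are assumptions for illustration, not the product’s actual settings.

```python
def should_stop(cpu_samples, idle_threshold_pct=2.0, idle_minutes=15, sample_period_min=1):
    """Return True if CPU has stayed below the idle threshold for the full idle window."""
    needed = idle_minutes // sample_period_min
    if len(cpu_samples) < needed:
        return False                      # not enough history yet
    recent = cpu_samples[-needed:]        # last idle_minutes worth of samples
    return all(pct < idle_threshold_pct for pct in recent)

busy = [35.0] * 10 + [1.0] * 5            # active 10 min, idle only 5 min so far
idle = [1.2] * 20                         # idle well past the 15-minute window
```

An instance matching the idle pattern would be stopped, while one that was recently busy keeps running until the full idle window has elapsed.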
We provide a robust cost governance dashboard, allowing users insights into their usage and budget consumption.
5. Security
Our target customers are in the research and scientific computing area, where data security is a key concern. We are frequently asked, “Will the system be secure? Can it help me meet regulatory requirements and compliance?” RLCatalyst Research Gateway is architected with security in mind at every level. The use of SSL certificates, encryption of data at rest, and the ability to initiate action at a distance are some of the architecture considerations.
Map of AWS Services
Amazon EC2 and Auto Scaling: Provides easily managed compute resources without the need to manage hardware; integrates well with Infrastructure as Code (IaC).
Amazon Virtual Private Cloud (VPC): Provides isolation of resources, easy management of traffic, and isolation of traffic.
Application Load Balancer (ALB): Provides an easy way to expose a single endpoint that can route traffic to multiple target groups; integrates with AWS Certificate Manager to provide SSL support.
AWS Cost Explorer and AWS Budgets (cost and governance): Provides fine-grained cost and usage data, with notifications when budget thresholds are reached.
AWS Service Catalog (catalog of approved IT services on AWS): Provides control over what resources can be used in an AWS account.
AWS WAF (Web Application Firewall): Helps manage malicious traffic.
DNS (Domain Name System) services: Provides hosted zones and API access to manage them.
CDN (Content Delivery Network): Caches content closest to end users to reduce latency and improve customer experience.
AWS Identity and Access Management (IAM) (authentication and authorization): Provides support for granular control based on policies and roles.
Amazon DocumentDB: Provides a MongoDB-compatible API.
Validation of the Solution
It is always good to validate your solution with an external review by experts. AWS offers this opportunity to all its partners through the AWS Foundational Technical Review (FTR). The review is valid for two years and is free of cost to partners. Looking at our design through the FTR lens showed us where it could better follow best practices (especially in the areas of security and cost-efficiency). Once those changes were implemented, we earned the “Reviewed by AWS” badge.
Relevance Lab developed the RLCatalyst Research Gateway in close partnership with AWS. One of the excellent tools available from AWS for any software architecture team is the AWS Well-Architected Framework, with its five pillars of Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization. Working within this framework greatly facilitates the development of a robust architecture that serves not only current but also future goals.
Relevance Lab has been an AWS partner for almost a decade now. The primary transition in 2021 was moving from a pure consulting partner to a niche technology partner of AWS based on the strengths of two new ISV Product launches with RLCatalyst Research Gateway and RLCatalyst AppInsights.
RLCatalyst AppInsights is built on AWS Service Catalog AppRegistry and helps achieve an “Application-Centric” view of cloud assets, costs, health, and security to achieve Governance360.
Customers have been demanding a “Solutions” approach from their partners that combine the strength of Products (own + third party) and Services to provide a unique business solution that removes friction and helps deliver key value. This is only possible by unifying the strength of Products + Services to create platform-based offerings delivered with a unique playbook for driving digital transformation.
The top five trends we observed over the last year regarding customer needs for cloud adoption are the following:
Cloud Adoption Acceleration
“Cloud Only” adoption to accelerate momentum of transitioning all internal systems, applications, and services to IaaS, PaaS, and SaaS solutions with an automation-first approach
DevOps Automation Led Operations
Critical focus on AIOps to ensure digital business operations are proactively managed, with operational best practices from Site Reliability Engineering (SRE) and DevOps, leveraging the ServiceNow platform
Frictionless Digital Workflows and Business Interactions
End-to-end business process integration with applications across self-developed products, PaaS platforms, and third-party SaaS solutions covering Shopify, Adobe Experience Manager, Demandware, Oracle Fusion, SOA/API Gateways, etc.
Cloud Data Lakes and Actionable Intelligence
Focus on agile business analytics with use of cloud-based data platforms leveraging Snowflake, Databricks, Azure Data Factory, AWS Data Lakes, etc., and integration with AI/ML tools with Sagemaker and RStudio
Security, Compliance, and Cost Management with focus on Governance360
Critical focus on security, governance, and cost optimization with a proactive model driven by a strong automation foundation
In this blog, we primarily cover the strategic achievements of our AWS partnership and the products and solutions that help our customers use the cloud “The Right Way”. The business benefits of this automation- and platform-led approach helped some of our key customers achieve significant outcomes, as explained below:
Sped up product delivery cycles by 3x, leveraging an Agile + DevOps approach for Product Engineering and Application Migrations
Cut cloud spending by 30% through better capacity utilization and effective cost tracking at the granular level of business units, applications, customer usage patterns, and transaction costs
Achieved 70% automated handling of inbound tickets by smart BOTs, leveraging Automated Service Management with our product and RPA tools to create an Automation Factory
Proactive security and vulnerability management, reducing the cost of compliance and cutting outages by 30%
Focus on effective data management and analytics, with more real-time insight into business transactions and actionable intelligence, leading to savings in excess of $300K annually for large supply chain use cases
Leveraging the AWS cloud is a foundational enabler for all Relevance Lab products and solutions. The diagram below shows a high-level overview of our AWS ecosystem coverage.
The journey snapshot of the last 12 months is captured in the diagram below.
Relevance Lab and AWS Journey Highlights
To recap our key progress for this year, we are presenting a quick brief of the last 12 months in reverse chronological order.
Solid partnership with AWS APJ teams for go-to-market in the region for scientific research with RLCatalyst Research Gateway. There is a strong endorsement from AWS business teams and Solution Architects on RL solutions being a relevant offering for regional needs
Launch of Cloud Academy to train a new batch of people based on a platform-led model for the ability to rapidly create a large and competent workforce for cloud opportunities
CoE (Center of Excellence) teams pursuing new use cases such as High-Performance Computing (a large and growing ecosystem) and AppStream-based training labs for education customers
AppInsights product on ServiceNow emerging as a brand new product conceptualized and launched in 2021 with joint efforts with AWS Control Services group
Relevance Lab’s focus on addressing the Digital Transformation jigsaw puzzle with RLCatalyst and SPECTRA Platforms
ServiceOne and RLCatalyst Intelligent Automation Reference Architecture
SPECTRA Reference Architecture for agile analytics applications
Relevance Lab Hyperautomation approach to business optimization
Relevance Lab Service Maturity Model
Taking AWS cloud & ServiceNow solutions to multiple new prospects interested in understanding our offering across cloud management, automation, DevOps, and AIOps managed services
Showcasing RLCatalyst Research Gateway solutions to multiple public sector institutions, non-profit research centers, and health care providers
Key tracks of RL Cloud CoE covering Cloud Management, Automation, DevOps and AIOps shared
Summary of 10 year journey for RL Company and Product lifecycle shared
Launch of MyResearchCloud, an easy way for small and mid-sized customers to use the RLCatalyst Research Gateway SaaS product with “Bring Your Own Account”
RLCatalyst AppInsights launched on ServiceNow Store
RLCatalyst Platform, Solutions and Products consolidated offering for automation published
Automation-First approach for Plan-Build-Run of cloud adoption detailed
Maturity model for BOTs design published
RLCatalyst Genomics Pipeline work with Nextflow started
Joint efforts on co-developing open-source solutions for scientific research in the cloud, with emphasis on the Health Informatics and Genomics processing space using RStudio
RLCatalyst Research Gateway solution reviewed and approved for the AWS ISV Partner Path, a special program for Independent Software Vendor (ISV) capabilities
Listing of Relevance Lab products and professional services on AWS Marketplace, which includes RLCatalyst Research Gateway SaaS and Governance360 solution built on AWS Control Tower
Selection of the AppInsights ServiceNow solution as the partner-built solution for AWS AppRegistry
Partnership with a specialist HIPAA governance solution provider for integrations into our Governance360 solution
Collaborating with the AWS recent solution announcement teams driving AWS Management and Governance Lens (part of AWS Well-Architected Framework prescribed offering)
RLCatalyst Research Gateway “test drive” by the first prospect, with useful inputs to make the onboarding process simpler and more frictionless. The expectation is to go from a “No-Cloud” to a “Full-Cloud” experience for scientific researchers in less than 15 minutes (Uber-style)
Relevance Lab enters the elite AWS Service Delivery Program for niche partners for AWS Service Catalog
Relevance Lab SmartView, built on AWS AppRegistry new concept for dynamic application CMDB, getting significant appreciation and visibility from AWS Management and Governance teams
Ongoing co-development and collaboration with AWS Service Workbench groups to scale up RStudio on AWS Cloud with shared AWS ALB (Application Load Balancer) architecture
RLCatalyst Research Gateway common use cases implementation
“Automation-First” model for Cloud adoption elaborated
Common use cases for Cloud migration with focus on Application Migration
Original concept of SmartView Solution (later renamed AppInsights) for Application CMDB created.
“Automation-First” approach to use AWS cloud “The Right Way” detailed
Common use cases for scientific research published
AWS ISV Partner Path program adoption initiated
Research@Scale Architecture Blueprint created for an integrated offering combining strengths of Relevance Lab product, solutions and services
Conceptualizing Governance360 Solution built with AWS Control Tower customization framework
Started evaluation of AWS Service Workbench with BioInformatics Blueprint, RStudio, Sagemaker
ServiceOne Transition Blueprint created
Relevance Lab RLCatalyst Research Gateway product positioning in market with focus on “Blue Ocean Strategy” shared to create a niche offering
RLCatalyst Research Gateway launched as a SaaS product on AWS Marketplace
ServiceOne team worked on the Compliance as a Code Framework involving AWS Control Tower
RLCatalyst – Our Platform through 2021
ServiceOne: Our Cloud CoE Stories in 2021
Partnership Journey through Blogs, Webinar & Videos in 2021
It was a busy year at Relevance Lab. We published a number of blogs covering our solutions powered by our partnership with AWS. The following is a collection of blogs published on our website throughout the year:
In addition, we successfully conducted a webinar with AWS and Dash Solutions. You can watch the recording here and download the presentation pdf here.
With the start of the new year 2022, we are very bullish about leveraging our AWS cloud products and solutions to help drive Frictionless Business for our customers. There are 100,000+ AWS Partners in the ecosystem worldwide, but Relevance Lab has created a unique differentiator and positioning by leveraging the power of our IP products as a key technology provider, complementing our deep services competencies and leading to tremendous momentum on new customer solutions.
Customers are continuing to face challenges with their business and supply chains in the pandemic era, and new business models are emerging that demand a new level of Agility + Automation. At Relevance Lab, we are constantly enhancing our offerings to help our customers navigate the Digital Transformation Puzzle and provide a unique value proposition with our global workforce across regions, critical investments in our IP platforms, and constant efforts on building deep competencies across cloud, data, and digital platforms.