2021 Blog, AppInsights Blog, Blog, Featured

Many AWS customers either integrate ServiceNow into their existing AWS services or set up both ServiceNow and AWS services for simultaneous use. Customers need a near real-time view of their infrastructure and applications spread across their distributed accounts.

Commonly referred to as a “Dynamic Application Configuration Management Database (CMDB)” or “Dynamic Assets” view, this capability gives customers integrated visibility into their infrastructure, breaking down silos and facilitating better decision making. From an end-user perspective, there is also a need for an “Application Centric” view rather than an “Infrastructure/Assets” view, since better visibility ultimately enhances their experience.

An “Application Centric” View provides the following insights.

  • Application master for the enterprise
  • Application linked infrastructure currently deployed and in use
  • Cost allocation at application levels (useful for chargebacks)
  • Current health, issues, and vulnerabilities with application context for better management
  • Better alignment with the existing enterprise context of business units, projects, and cost codes for budget planning and tracking

Use Case benefits for ServiceNow customers
Near real-time view of AWS applications and infrastructure workloads across multiple AWS accounts in ServiceNow. Customers can enable self-service for their Managed Service Provider (MSP) and their developers to:

  • Maintain established ITSM policies & processes
  • Enforce Consistency
  • Ensure Compliance
  • Ensure Security
  • Eliminate IAM access to underlying services

Use Case benefits for AWS customers
Enabling application self-service for general and technical users. The customer would like service owners (e.g., HR, Finance, Security, and Facilities) to view AWS infrastructure-enabled applications via self-service while ensuring:

  • Compliance
  • Security
  • Reduced application onboarding time
  • Visual consistency across all businesses

RLCatalyst AppInsights Solution – Built on AppRegistry
Working closely with AWS partnership groups to address the key needs of customers, the RLCatalyst AppInsights Solution provides a “Dynamic CMDB” that is Application Centric, with the following highlights:

  • Built on “AWS AppRegistry” and tightly integrated with AWS products
  • Combines information from the following Data Sources:
    • AWS AppRegistry
    • AWS Accounts
      • Design time Data (Definitions – Resources, Templates, Costs, Health, etc.)
      • Run time Data (Dynamic Information – Resources, Templates, Costs, Health, etc.)
    • AppInsights Additional Functionality
      • Service Registry Insights
      • Aggregated Data (Lake) with Dynamic CMDB/Asset View
      • UI Interaction Engine with appropriate backend logic


A well-defined Dynamic Application CMDB is mandatory in cloud infrastructure to track assets effectively and serves as the basis for effective Governance360.

AWS recently released a new feature called AppRegistry to help customers natively build an inventory of AWS resources with insights into their usage across applications. AWS Service Catalog AppRegistry allows creating a repository of your applications and associated resources. Customers can define and manage their application metadata, which allows them to understand the context of their applications and resources across their environments. These capabilities enable enterprise stakeholders to obtain the information they require for informed strategic and tactical decisions about cloud resources. Using AppRegistry as a base product, we have created AppInsights, a Dynamic Application CMDB solution, to benefit AWS and ServiceNow customers as explained in the figure below.



Modeling a common customer use case
Most customers have multiple applications deployed in different regions, constituting sub-applications, underlying web services, and related infrastructure, as explained in the figure below. The dynamic nature of cloud assets and automated provisioning with Infrastructure as Code makes the discovery process and keeping the CMDB up to date a non-trivial problem.



As explained above, a typical customer setup consists of different business units deploying applications in different market regions across a complex and hybrid infrastructure. Most existing CMDB applications provide a static assets view that is incomplete and not well aligned with growing needs for real-time application-centric analysis, cost allocation, and application health insights. The AppInsights solution solves this problem by leveraging customers’ existing investments in ServiceNow ITSM licenses and pre-existing AWS solutions like the AWS Service Management Connector, available at no additional cost. The missing piece until recently was application-centric metadata linking applications to infrastructure templates.

Customers need to be able to see information across their AWS accounts, with details of applications, infrastructure, and costs, in a simple and elegant manner, as shown below. The basic KPIs tracked in the dashboard are the following:

  • Dashboard per AWS account (aggregated information across accounts to be added later)
  • Ability to track an Application View with Active Application Instances, AWS Active Resources and Associated Costs
  • Trend Charts for Application, Infrastructure and Cost Details
  • Drill-down ability to view all applications and associated active instances, which are updated dynamically using a periodic sync option or on demand

The ability to get a Dynamic Application CMDB comes from leveraging the AWS Well-Architected best practice of “Infrastructure as Code”, relying on AWS Service Catalog, AWS Service Management Connector, AWS CloudFormation templates, AWS Cost & Budgets, and AWS AppRegistry. The application is built as a scoped application inside ServiceNow and leverages standard ITSM licenses, making it easy for customers to adopt and share this solution with business users without needing AWS Console access.
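The tag-driven aggregation behind such a dashboard can be sketched in a few lines. The data shapes and the "Application" tag key below are hypothetical stand-ins for what AppRegistry and the cost feeds would supply, not the product's actual schema:

```python
from collections import defaultdict

def build_app_view(resources):
    """Group discovered cloud resources by their 'Application' tag and
    aggregate counts and costs into an application-centric view."""
    view = defaultdict(lambda: {"resources": 0, "monthly_cost": 0.0})
    for res in resources:
        app = res.get("tags", {}).get("Application", "untagged")
        view[app]["resources"] += 1
        view[app]["monthly_cost"] += res.get("monthly_cost", 0.0)
    return dict(view)

# Illustrative inventory, as a real sync from AWS accounts might return it
resources = [
    {"id": "i-1", "tags": {"Application": "payroll"}, "monthly_cost": 42.0},
    {"id": "i-2", "tags": {"Application": "payroll"}, "monthly_cost": 8.0},
    {"id": "db-1", "tags": {}, "monthly_cost": 100.0},
]
app_view = build_app_view(resources)
```

Untagged resources surface as their own bucket, which is itself a useful KPI: it shows how much spend is not yet attributable to any application.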



Workflow steps for the adoption of RLCatalyst AppInsights are explained below. The solution is based on standard AWS and ServiceNow products commonly used in enterprises and builds on existing best practices, processes, and collaboration models.


Step-1: Define AppRegistry data (AWS AppRegistry)
Step-2: Link applications to infrastructure templates via CloudFormation Templates (CFT) / Service Catalog (SC) (AWS Accounts – Asset Definitions)
Step-3: Ensure all provisioned assets have application and service tagging, enforced with guard rails (AWS Accounts – Asset Runtime Data)
Step-4: Register application services (Service Registry)
Step-5: Refresh the AppInsights Data Lake with static and dynamic updates, aggregated across accounts (RLCatalyst AppInsights)
Step-6: View the Asset, Cost, and Health Dashboard (RLCatalyst AppInsights)
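The tagging guard rail in Step-3 can be sketched as a simple pre-provisioning check. In practice this would be enforced with AWS Config rules or Service Catalog constraints; the required tag keys here are assumptions for illustration:

```python
REQUIRED_TAGS = {"Application", "Service"}  # hypothetical mandatory tag keys

def missing_tags(resource_tags):
    """Return the set of mandatory tag keys absent from a resource."""
    return REQUIRED_TAGS - set(resource_tags)

def enforce_guard_rail(resource_tags):
    """Block provisioning of any asset without App and Service tagging."""
    missing = missing_tags(resource_tags)
    if missing:
        raise ValueError(f"Provisioning blocked, missing tags: {sorted(missing)}")
    return True

enforce_guard_rail({"Application": "payroll", "Service": "web"})  # passes
```

Rejecting untagged assets at provisioning time is what keeps the Step-5 data lake refresh able to attribute every resource to an application.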


A typical implementation of RLCatalyst AppInsights can be rolled out for a new customer in 4-6 weeks and can provide significant business benefits for multiple groups, enabling better operations support, self-service requests, application-specific diagnostics, asset usage, and cost management. The base solution is built on a flexible architecture, allowing for more advanced customization to extend with real-time health and vulnerability mappings and achieve AIOps maturity. In the future, there are plans to extend the Application Centric views to cover more granular “Services” tracking to support microservice architectures, container-based deployments, and integration with other PaaS/SaaS services.

Summary
Cloud-based dynamic assets create great flexibility but add complexity for near real-time asset and CMDB tracking. While existing solutions using discovery tools and Service Management connectors provided a partial, infrastructure-centric view of the CMDB, a robust Application Centric Dynamic CMDB was the missing piece; it is now addressed with RLCatalyst AppInsights built on AppRegistry, as explained in this blog.

For more information, feel free to contact marketing@relevancelab.com.

References
Governance360 – Are you using your AWS Cloud “The Right Way”
ServiceNow CMDB
Increase application visibility and governance using AWS Service Catalog AppRegistry
AWS Security Governance for Enterprises “The Right Way”
Configuration Management in Cloud Environments




2021 Blog, Blog, BOTs Blog, Featured

In large enterprises with complex systems, covering new-generation cloud-based platforms while continuing with stable legacy back-end infrastructure usually results in high-friction points at the integration layers. These incompatible systems can also slow down enterprise automation efforts to free up humans and have BOTs take over repetitive tasks. Now, with RLCatalyst BOTs Server, leveraging the common platforms of ServiceNow and UiPath, we provide an intelligent and scalable solution that can also cover legacy infrastructure like AS/400 with terminal interfaces. Such applications are commonly found in the supply chain, logistics, and warehousing domains, supporting temporary/flexi-staff onboarding and offboarding based on transaction volumes in industries that see demand spikes around special events, driving the need for automation-first solutions.

Integration of a cloud-based ticketing system with a terminal-based system has always required a support engineer, especially in labor-intensive industries. This is true for any legacy system that does not provide an external API for integration. Diverse issues occur that slow down business, and any solution must address them without compromising the security and governance aspects of such workflows.

With the lack of a stable API to interface with the AS/400 legacy system, we decided to rely on BOTs simulating the same behavior as humans dealing with terminal interfaces. RLCatalyst BOTs was extensively used as an IT automation solutioning platform for ServiceNow, and the same concept was extended to interact with the terminal interfaces commonly used in Robotic Process Automation (RPA) use cases with UiPath. RLCatalyst acts as an “Automation Service Bus” and manages the integration between the ServiceNow ITSM platform and the UiPath terminal interface engine. The solution is extendable and can be used to solve other common problems, especially integration between IT and business systems.



Using UiPath to automate processes in legacy systems
Leveraging the capabilities of UiPath to automate terminal-based legacy systems, RLCatalyst interfaces with the service portal to get all the information required to help UiPath’s UiRobot execute the steps defined in the workflow. RLCatalyst’s BOT framework provides the necessary tools to run and schedule BOTs with governance and audit-trail functionality.

Case Study – Onboarding an AS400 system user
The legacy AS/400 system’s user onboarding process was multi-staged, with each stage representing a server with its own ACL tables. A common profile name linked the servers, and in some cases independent logins were required. A process definition document was the only governing document helping a CS executive complete the onboarding process.

The design used to automate the process was:

  • Build individual workflows for each stage in the User interaction processes using UiPath.
  • Build an RLCatalyst BOT which:
    • Refers to a template that includes a reference to the stages to run based on the type of user that needs to be onboarded.
    • Based on the template, it would maintain a document of profile names allotted.
    • Validates the profile availability (this helps onboarding outside the automation)
    • Executes the UiPath workflow for each stage in the sequence defined in the template.
    • Once the execution is complete, a summary of the execution with user login details is sent back to the ITSM system.
    • Logs for each stage are maintained for analysis and error corrections.
  • Build the Service Portal approval workflow, which would finally create a task for the automation process for fulfillment.
    • The service portal form captures all the necessary information for onboarding a new user.
    • Based on the account template selected that depicts a work department, a template reference is captured and included in the submitted form.
    • The service portal is used by the SOX compliance team to trace approval and provisioning details.
    • The process trail becomes critical during off-boarding to confirm that access revocation has occurred without delay.
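The BOT’s stage execution described above can be sketched as follows. The template shape, profile naming, and stage runners are hypothetical; in the real solution each stage invokes a UiPath workflow and the summary goes back to the ITSM system:

```python
def run_onboarding(template, stage_runners, profile_registry):
    """Execute onboarding stages in the order defined by the template,
    collecting a per-stage log for audit and error correction."""
    # Allot the next available profile name (validated against the registry)
    profile = template["profile_prefix"] + str(len(profile_registry) + 1)
    if profile in profile_registry:
        raise ValueError(f"Profile {profile} already allotted")
    log = []
    for stage in template["stages"]:
        result = stage_runners[stage](profile)   # would call a UiPath workflow
        log.append((stage, result))
    profile_registry.add(profile)
    return {"profile": profile, "log": log}      # summary sent back to ITSM

# Illustrative runners standing in for UiPath terminal workflows
runners = {s: (lambda p, s=s: f"{s} ok for {p}") for s in ("billing", "inventory")}
template = {"profile_prefix": "WH", "stages": ["billing", "inventory"]}
registry = set()
summary = run_onboarding(template, runners, registry)
```

Keeping the stage order in the template, rather than in code, is what lets one BOT serve different user types (departments) with different stage sequences.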

Advantages of Using UI Automation Over Standard API

  • Some of the AS400 servers used for Invoicing/Billing are over 20 years old, and the processes are as old as the servers themselves. The challenge multiplies when the application code is only understood by a small set of IT professionals.
  • UI automation eliminates system-testing costs, since it simply mimics a user; all user flows have already been tested.
  • The time taken to build end-to-end automation is significantly less than having a scarce, in-demand IT professional build an API interface to automate it.
  • Total automation investment is also significantly reduced, and ROI is quicker.

Getting Started
Based on our previous experience in integrating and automating processes, our pre-built libraries and BOTs should provide a head start to your automation needs. The framework would ensure that it meets all the necessary security and compliance needs.

For more details, please feel free to reach out to marketing@relevancelab.com.




2021 Blog, Blog, Featured

AWS Marketplace is a high-potential channel for the delivery of software and professional services. The main benefit for customers is that they get a single bill from AWS for all their infrastructure and software consumption. Also, since AWS is already on the approved vendor list for many enterprises, it makes it easier for them to consume software from the same vendor as well.

Relevance Lab has always considered AWS Marketplace as one of the important channels for the distribution of its software products. In 2020 we had listed our RLCatalyst 4.3.2 BOTs Server product on the AWS Marketplace as an AMI-based product that a customer could download and run in their AWS account. This year, RLCatalyst Research Gateway was listed on the AWS Marketplace as a Software as a Service (SaaS) product.

This blog details some of the steps that a customer needs to go through to consume this product from the AWS Marketplace.


Step-1: The first step for a customer looking to find the product is to log in to their account, visit the AWS Marketplace, and search for RLCatalyst Research Gateway. This will show the Research Gateway product at the top of the results list. Click on the link to reach the details page.

The product details page lists important details like:

  • Pricing information
  • Support information
  • Set up instructions

Step-2: The second step for the user is to subscribe to the product by clicking on the “Continue to Subscribe” button. This step requires the user to log in to their AWS account (if not done earlier). The page that comes up shows the contract options the user can choose from. RLCatalyst Research Gateway (SaaS) offers three subscription tiers.

  • Small tier (1-10 users)
  • Medium tier (11-25 users)
  • Large tier (unlimited users)

Also, the customer has the option of choosing a monthly contract or an annual contract. The monthly contract is good for customers who want to try the product or for those customers who would like a budget outflow that is spread over the year rather than a lump sum. The annual contract is good for customers who are already committed to using the product in the long term. An annual contract gets the customer an additional discount over the monthly price.

The customer also has to choose whether they want the contract to renew automatically or not.

One of the great features of AWS Marketplace is that the customer can modify the contract at any time and upgrade to a higher plan (e.g. Small tier to Medium or Large tier). The customer can also modify the contract to opt for auto-renewal at any time.
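The tier boundaries above can be captured in a small helper. This is only an illustration of the pricing structure; actual entitlement and contract changes are handled by AWS Marketplace itself:

```python
def subscription_tier(user_count):
    """Map an expected user count to the Research Gateway contract tier:
    Small (1-10 users), Medium (11-25 users), Large (unlimited)."""
    if user_count < 1:
        raise ValueError("at least one user required")
    if user_count <= 10:
        return "Small"
    if user_count <= 25:
        return "Medium"
    return "Large"
```

Because Marketplace contracts can be upgraded at any time, a customer can safely start with the tier matching today's head count, e.g. `subscription_tier(8)` returns `"Small"`, and move up later.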

Step-3: The third step for the user is to click on the “Subscribe” button after choosing their contract options. This leads the user to the registration page where they can set up their RLCatalyst Research Gateway account.



This screen is meant for the Administrator persona to enter the details for the organization. Once the user enters the details, agrees to the End User License Agreement (EULA), and clicks on the Sign-up button, the process for provisioning the account is set in motion. The user should get an acknowledgment email within 12 hours and a verification email within 24 hours.

Step-4: The user should verify their email account by clicking on the verification link in the email they receive from RLCatalyst Research Gateway.

Step-5: Finally, the user will get a “Welcome” email with the details of their account, including the custom URL for logging in to their RLCatalyst Research Gateway account. The user is now ready to log in to the portal. On logging in, the user will see a Welcome screen.


Step-6: The user can now set up their first Organizational Unit in the RLCatalyst Research Gateway portal by following these steps.

6.1 Navigate to settings from the menu at the top right.


6.2 Click on the “Add New” button to add an AWS account.


6.3 Enter the details of the AWS account.


Note that the account name given on this screen is any name that will help the Administrator remember which OU and project the account is meant for.

6.4 The Administrator can repeat the procedure to add more than one project (consumption) account.

Step-7: Next, the Administrator needs to add Principal Investigator users to the account. For this, they should contact the support team either by email (rlc.support@relevancelab.com) or by visiting the support portal (https://serviceone.relevancelab.com).

Step-8: The final step to set up an OU is to click on the “Add New” button on the Organizations page.


8.1 The Administrator should give a friendly name to the Organization in the “Organization Name” field. Then they should choose all the accounts that will be consumed by projects in this Organization. A friendly description should be entered in the “Organization Description” field. Finally, choose the Principal Investigator who will manage/own this Organizational Unit. Click “Add Organization” to add the OU.


Summary
As you can see above, ordering RLCatalyst Research Gateway (SaaS) from the AWS Marketplace makes it extremely easy for the user to get started, and end-users can start using the product in no time. Given the SaaS model, the customer does not need to worry about setting up the software in their account. At the same time, using their own AWS accounts for the projects gives them complete transparency into budget consumption.
In our next blog, we will provide step-by-step details of adding organizational units, projects, and users to complete the next part of the setup.

To learn more about AWS Marketplace installation click here.

If you want to learn more about the product or book a live demo, feel free to contact marketing@relevancelab.com.




2021 Blog, AWS Service, Blog, Featured

Working on non-scientific tasks such as setting up instances, installing software libraries, making models compile, and preparing input data is one of the biggest pain points for atmospheric scientists, or any scientists for that matter. It is challenging because it requires strong technical skills, deviating them from their core areas of analysis and research data compilation. Further, some of these tasks require high-performance computation, complicated software, and large data sets. Lastly, researchers need a real-time view of their actual spending, as research projects are often budget-bound. Relevance Lab helps researchers “focus on science and not servers”, in partnership with AWS, leveraging the RLCatalyst Research Gateway (RG) product.

Why RLCatalyst Research Gateway?
Speeding up scientific research using the AWS cloud is a growing trend toward achieving “Research as a Service”. However, adoption of the AWS Cloud can be challenging for researchers, with surprises on costs, security, governance, and the right architectures. Similarly, Principal Investigators can have a challenging time managing research programs with collaboration, tracking, and control. Research institutions would like to provide consistent and secure environments, standard approved products, and proper governance controls. The product was created to solve these common needs of researchers, Principal Investigators, and research institutions.


  • Available on AWS Marketplace and can be consumed in both SaaS as well as Enterprise mode
  • Provides a Self-Service Cloud Portal with the ability to manage the provisioning lifecycle of common research assets
  • Gives real-time visibility of spend against the defined project budgets
  • The Principal Investigator can pause or stop a project when the budget is exceeded, until a new grant is approved

In this blog, we explain how the product has been used to solve a common research problem of GEOS-Chem used for Earth Sciences. It covers a simple process that starts with access to large data sets on public S3 buckets, creation of an on-demand compute instance with the application loaded, copying the latest data for analysis, running the analysis, storing the output data, analyzing the same using specialized AI/ML tools and then deleting the instances. This is a common scenario faced by researchers daily, and the product demonstrates a simple Self-Service frictionless capability to achieve this with tight controls on cost and compliance.
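The budget control mentioned earlier, pausing or stopping a project as spend approaches or exceeds its budget, can be sketched as a simple policy function. The 90% pause threshold is an assumption for illustration, not the product's actual setting:

```python
def project_action(spend, budget, pause_at=0.9):
    """Decide a project action from current spend against its budget:
    'ok' under the pause threshold, 'pause' when nearing the budget,
    and 'stop' once the budget is exceeded (until a new grant arrives)."""
    if budget <= 0:
        raise ValueError("budget must be positive")
    ratio = spend / budget
    if ratio >= 1.0:
        return "stop"
    if ratio >= pause_at:
        return "pause"
    return "ok"
```

Evaluating this on every cost-sync cycle gives the Principal Investigator an automatic early warning well before a grant is exhausted.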

GEOS-Chem enables simulations of atmospheric composition on local to global scales. It can be used off-line as a 3-D chemical transport model driven by assimilated meteorological observations from the Goddard Earth Observing System (GEOS) of the NASA Global Modeling Assimilation Office (GMAO). The figure below shows the basic construct on GEOS-Chem input and output analysis.



Being a common use case, there is documentation available in the public domain by researchers on how to run GEOS-Chem on AWS Cloud. The product makes the process simpler using a Self-Service Cloud portal. To know more about similar use cases and advanced computing options, refer to AWS HPC for Scientific Research.



Steps for GEOS-Chem Research Workflow on AWS Cloud
Prerequisites for a researcher before starting data analysis:

  • A valid AWS account and access to the RG portal
  • A publicly accessible S3 bucket with large Research Data sets accessible
  • Create an additional EBS volume for your ongoing operational research work. (For occasional usage, it is recommended to upload the snapshot in S3 for better cost management.)
  • A pre-provisioned SageMaker Jupyter notebook to analyze output data

Once done, below are the steps to execute this use case.

  • Login to the RG Portal and select the GEOS-Chem project
  • Launch an EC2 instance with GEOS-Chem AMI
  • Login to EC2 using SSH and configure AWS CLI
  • Connect to a public S3 bucket from AWS CLI to list NASA-NEX data
  • Run the simulation and copy the output data to a local S3 bucket
  • Link the local S3 bucket to AWS SageMaker instance and launch a Jupyter notebook for analysis of the output data
  • Once done, terminate the EC2 instance and check for the cost spent on the use case
  • All costs related to GEOS-Chem project and researcher consumption are tracked automatically

Sample Output Analysis
Once you run the output files on the Jupyter notebook, it does the compilation and provides output data in a visual format, as shown in the sample below. The researcher can then create a snapshot and upload it to S3 and terminate the EC2 instance (without deleting the additional EBS volume created along with EC2).

Output to analyze loss rate and Air mass of Hydroxide pertaining to Atmospheric Science.


Summary
Scientific computing can take advantage of cloud computing to speed up research, scale-up computing needs almost instantaneously, and do all this with much better cost-efficiency. Researchers no longer need to worry about the expertise required to set up the infrastructure in AWS as they can leave this to tools like RLCatalyst Research Gateway, thus compressing the time it takes to complete their research computing tasks.

The steps demonstrated in this blog can easily be replicated for other similar research domains. It can also be used to onboard new researchers with pre-built solution stacks in an easy-to-consume option. RLCatalyst Research Gateway is available in SaaS mode from the AWS Marketplace, and research institutions can continue to use their existing AWS accounts to configure and enable the solution for more effective scientific research governance.

To learn more about GEOS-Chem use cases, click here.

If you want to learn more about the product or book a live demo, feel free to contact marketing@relevancelab.com.

References
Enabling Immediate Access to Earth Science Models through Cloud Computing: Application to the GEOS-Chem Model
Enabling High‐Performance Cloud Computing for Earth Science Modeling on Over a Thousand Cores: Application to the GEOS‐Chem Atmospheric Chemistry Model




HPC Blog, 2021 Blog, AWS Service, Blog, Featured

AWS provides a comprehensive, elastic, and scalable cloud infrastructure to run your HPC applications. Working with AWS to explore HPC for driving scientific research, Relevance Lab leveraged its RLCatalyst Research Gateway product to provision an HPC cluster using AWS Service Catalog, with simple steps to launch a new environment for research. This blog captures the steps used to launch a simple HPC 1.0 cluster on AWS and a roadmap to extend the functionality to cover more advanced use cases of AWS Parallel Cluster.

AWS delivers an integrated suite of services that provides everything needed to build and manage HPC clusters in the cloud. These clusters are deployed over various industry verticals to run the most compute-intensive workloads. AWS has a wide range of HPC applications spanning from traditional applications such as genomics, computational chemistry, financial risk modeling, computer-aided engineering, weather prediction, and seismic imaging to new applications such as machine learning, deep learning, and autonomous driving. In the US alone, multiple organizations across different specializations are choosing cloud to collaborate for scientific research.


Similar programs exist across different geographies and institutions in the EU and Asia, along with country-specific public sector programs. Our focus is to work with AWS and regional scientific institutions to bring the power of supercomputers to day-to-day researchers in a cost-effective manner with proper governance and tracking. Also, with Self-Service models, the shift needs to happen from worrying about computation to focusing on data, workflows, and analytics, which requires a new paradigm: the prospect of serverless scientific computing that we cover in later sections.

Relevance Lab RLCatalyst Research Gateway provides a Self-Service Cloud portal to provision AWS products with a 1-click model based on AWS Service Catalog. While dealing with more complex AWS products like HPC, there is a need for a multi-step provisioning model and post-provisioning actions that are not always possible using standard AWS APIs. In these situations, requiring complex orchestration and post-provisioning automation, RLCatalyst BOTs provide a flexible and scalable solution to complement the base Research Gateway features.

Building blocks of HPC on AWS
AWS offers various services that make it easy to set up an HPC setup.


An HPC solution in AWS uses the following components as building blocks.

  • EC2 instances are used for Master and Worker nodes. The master nodes can use On-Demand instances and the worker nodes can use a combination of On-Demand and Spot Instances.
  • The software for the manager nodes is built as an AMI and used for the creation of Master nodes.
  • The agent software for the managers to communicate with the worker nodes is built into a second AMI that is then used for provisioning the Worker nodes.
  • Data is shared between different nodes using a file-sharing mechanism like FSx Lustre.
  • Long-term storage uses AWS S3.
  • Scaling of nodes is done via Auto-scaling.
  • KMS for encrypting and decrypting keys.
  • Directory services to create the domain name for accessing HPC via the UI.
  • Lambda functions to create the user directory.
  • Elastic Load Balancing is used to distribute incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, Lambda functions, and virtual appliances.
  • Amazon EFS is used as a regional service, storing data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs.
  • AWS VPC to launch the EC2 instances in private cloud.

Evolution of HPC on AWS
  • HPC clusters first came into existence in AWS using the CfnCluster CloudFormation template, which creates a number of manager and worker nodes in the cluster based on the input parameters. This product can be made available through AWS Service Catalog and is an item that can be provisioned from RLCatalyst Research Gateway. Cluster manager software like Slurm, Torque, or SGE is pre-installed on the manager nodes, and the agent software is pre-installed on the worker nodes. Also pre-installed is software that can provide a UI (like NICE EnginFrame) for the user to submit jobs to the cluster manager.
  • AWS Parallel Cluster is a newer offering from AWS for provisioning an HPC cluster. This service provides an open-source, CLI-based option for setting up a cluster. It sets up the manager and worker nodes and also installs controlling software that can watch the job queues and trigger scaling requests on the AWS side so that the overall cluster can grow or shrink based on the size of the queue of jobs.

Steps to Launch HPC from RLCatalyst Research Gateway
A standard HPC launch involves the following steps.

  • Provide the input parameters for the cluster. This will include
    • The compute instance size for the master node (vCPUs, RAM, Disk)
    • The compute instance size for the worker nodes (vCPUs, RAM, Disk)
    • The minimum and maximum number of worker nodes.
    • Select the workload manager software (Slurm, Torque, SGE)
    • Connectivity options (SSH keys etc.)
  • Launch the product.
  • Once the product is in Active state, connect to the URL in the Output parameters on the Product Details page. This connects you to the UI from where you can submit jobs to the cluster.
  • You can SSH into the master nodes using the key pair selected in the Input form.
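The input form above can be modeled as a parameter set with basic validation before provisioning. This is a sketch with assumed field names, not the actual Service Catalog product schema:

```python
WORKLOAD_MANAGERS = {"slurm", "torque", "sge"}

def validate_hpc_params(params):
    """Sanity-check HPC cluster launch inputs before provisioning."""
    if params["scheduler"].lower() not in WORKLOAD_MANAGERS:
        raise ValueError(f"unsupported scheduler: {params['scheduler']}")
    if not (0 < params["min_workers"] <= params["max_workers"]):
        raise ValueError("need 0 < min_workers <= max_workers")
    if not params.get("ssh_key"):
        raise ValueError("an SSH key pair is required for master-node access")
    return params

# Hypothetical launch request for a small Slurm cluster
cluster = validate_hpc_params({
    "master_instance": "c5.xlarge",
    "worker_instance": "c5.2xlarge",
    "min_workers": 2,
    "max_workers": 16,
    "scheduler": "Slurm",
    "ssh_key": "research-keypair",
})
```

Catching bad combinations (e.g. a minimum worker count above the maximum) at the portal avoids a 45-minute CloudFormation run that would only fail at the end.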

RLCatalyst Research Gateway uses the CfnCluster method to create an HPC cluster. This allows the HPC cluster to be created just like any other product in our Research Gateway catalog. Though provisioning may take up to 45 minutes to complete, it creates a URL in the outputs which can be used to submit jobs.

Advanced Use Cases for HPC

  • Computational Fluid Dynamics
  • Risk Management & Portfolio Optimization
  • Autonomous Vehicles – Driving Simulation
  • Research and Technical Computing on AWS
  • Cromwell on AWS
  • Genomics on AWS

We have specifically looked at the use case that pertains to BioInformatics where a lot of the research uses Cromwell server to process workflows defined using the WDL language. The Cromwell server acts as a manager that controls the worker nodes, which execute the tasks in the workflow. A typical Cromwell setup in AWS can use AWS Batch as the backend to scale the cluster up and down and execute containerized tasks on EC2 instances (on-demand or spot).



Prospect of Serverless Scientific Computing and HPC
With the advent of serverless computing (the “Function as a Service” paradigm) and its availability on all major cloud platforms, it is now possible to take computing that would traditionally be done on a High Performance Cluster and run it as Lambda functions. The obvious advantage of this model is that the virtual cluster is highly elastic, and you are charged only for the exact execution time of each function executed.
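
A minimal sketch of this model, assuming each function invocation processes one partition of a larger computation and returns a partial result for later aggregation (the event shape and the sum-of-squares workload are purely illustrative):

```python
import json

def handler(event, context):
    """Hypothetical Lambda worker: each invocation processes one partition
    of a larger scientific computation (here a trivial sum of squares)
    and returns a partial result that an aggregator can combine."""
    values = event["values"]            # the partition assigned to this invocation
    partial = sum(v * v for v in values)
    return {"statusCode": 200, "body": json.dumps({"partial": partial})}
```

In a real deployment, hundreds of such invocations would run in parallel, with an orchestrator fanning out partitions and reducing the partial results.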

One current limitation of this model is that only a few runtimes, such as Node.js and Python, are supported, while a lot of scientific computing code may use additional languages like C, C++, Java, etc. However, this is changing fast, and cloud providers are introducing new runtimes like Go and Rust.


Summary
Scientific computing can take advantage of cloud computing to speed up research, scale up computing needs almost instantaneously, and do all this with much better cost efficiency. Researchers no longer need to worry about the expertise required to set up the infrastructure in AWS, as they can leave this to tools like RLCatalyst Research Gateway, thus compressing the time it takes to complete their research computing tasks.

To learn more about this solution, or to use it for your internal needs, feel free to contact marketing@relevancelab.com

References
Getting started with HPC on AWS
HPC on AWS Whitepaper
AWS HPC Workshops
Genomics in the Cloud
Serverless Supercomputing: High Performance Function as a Service for Science
FaaSter, Better, Cheaper: The Prospect of Serverless Scientific Computing and HPC




2021 Blog, AWS Governance, Governance360, Blog, Featured

Compliance on the Cloud is an important aspect in today’s world of remote working. As enterprises accelerate the adoption of cloud to drive frictionless business, there can be surprises on security, governance and cost without a proper framework. Relevance Lab (RL) helps enterprises speed up workload migration to the cloud with the assurance of Security, Governance and Cost Management using an integrated solution built on AWS standard products and an open-source framework. The key building blocks of this solution are described below.


Why do enterprises need Compliance as a Code?
For most enterprises, the major challenges are governance, compliance, and a lack of visibility into their cloud infrastructure. They spend thousands of man-hours trying to achieve security and compliance in a siloed manner. This can be addressed by automating compliance monitoring and increasing visibility across the cloud with the right set of tools and frameworks. Relevance Lab's Compliance as a Code framework addresses the enterprise need to automate security and compliance. Through a combination of preventive, detective and responsive controls, we help enterprises enforce nearly continuous compliance and auto-remediation, thereby increasing overall security and reducing compliance cost.

Key tools and framework of Cloud Governance 360°
AWS Control Tower: AWS Control Tower (CT) helps organizations set up, manage, monitor, and govern a secure multi-account environment using AWS best practices. Setting up Control Tower on a new account is relatively simple compared to setting it up on an existing account. Once Control Tower is set up, the landing zone has the following.


  • 2 Organizational Units
  • 3 accounts, a master account and isolated accounts for log archive and security audit
  • 20 preventive guardrails to enforce policies
  • 2 detective guardrails to detect config violations

Apart from this, you can customize the guardrails and implement them using AWS Config Rules. For more details on Control Tower implementation, refer to our earlier blog here.

Cloud Custodian: Cloud Custodian is a tool that unifies the dozens of tools and scripts most organizations use for managing their public cloud accounts into one open-source tool. It uses a stateless rules engine for policy definition and enforcement, with metrics, structured outputs and detailed reporting for Cloud Infrastructure. It integrates tightly with serverless runtimes to provide real time remediation/response with low operational overhead.

Organizations can use Custodian to manage their cloud environments by ensuring compliance to security policies, tag policies, garbage collection of unused resources, and cost management from a single tool. Custodian adheres to a Compliance as Code principle, to help you validate, dry run, and review changes to your policies. The policies are expressed in YAML and include the following.

  • The type of resource to run the policy against
  • Filters to narrow down the set of resources
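
A minimal Cloud Custodian policy illustrating those two elements might look like this; the policy name and tag key are examples, not prescribed values:

```yaml
policies:
  - name: ec2-missing-owner-tag    # example policy name
    resource: ec2                  # the type of resource to run the policy against
    filters:
      - "tag:Owner": absent        # narrow down to instances missing the Owner tag
```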

Cloud Custodian is a rules engine for managing public cloud accounts and resources. It allows users to define policies to enable a well managed Cloud Infrastructure, that’s both secure and cost optimized. It consolidates many of the ad hoc scripts organizations have into a lightweight and flexible tool, with unified metrics and reporting.



Security Hub: AWS Security Hub gives you a comprehensive view of your security alerts and security posture across your AWS accounts. It’s a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Firewall Manager, as well as from AWS Partner solutions like Cloud Custodian. You can also take action on these security findings by investigating them in Amazon Detective or by using Amazon CloudWatch Event rules to send the findings to an ITSM, chat, Security Information and Event Management (SIEM), Security Orchestration Automation and Response (SOAR), and incident management tools or to custom remediation playbooks.
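
For instance, a CloudWatch Events/EventBridge rule that forwards Security Hub findings to such downstream tools typically matches an event pattern like the following:

```json
{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"]
}
```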




Below is a snapshot of features across AWS Control Tower, Cloud Custodian and Security Hub. As shown in the table, these solutions complement each other across common compliance needs.


SI No | AWS Control Tower | Cloud Custodian | Security Hub
1 | Easy to implement and configure within a few clicks | Lightweight and flexible open-source framework that helps deploy cloud policies | Gives a comprehensive view of security alerts and security posture across AWS accounts
2 | Helps achieve “Governance at Scale” – account management, security, compliance automation, budget and cost management | Helps achieve real-time compliance and cost management | A single place that aggregates, organizes, and prioritizes security alerts and findings from multiple AWS services
3 | Predefined guardrails based on best practices – establish/enable guardrails | You define the rules and Cloud Custodian enforces them | Continuously monitors the account using automated security checks based on AWS best practices
4 | Guardrails are enabled at the organization level | If an account needs to include or exclude certain policies, those exemptions can be handled | With a few clicks in the Security Hub console, you can connect multiple AWS accounts and consolidate findings across them
5 | Automates compliant account provisioning | Can be included in the account-creation workflow to deploy the set of policies to every AWS account as part of bootstrapping | Automates continuous, account- and resource-level configuration and security checks using industry standards and best practices
6 | Separate account for centralized logging of all activities across accounts | Offers comprehensive logs whenever a policy is executed, which can be stored in an S3 bucket | Create and customize your own insights, tailored to your specific security and compliance needs
7 | Separate account for audit, designed to give security and compliance teams read and write access to all accounts | Can be integrated with AWS Config, AWS Security Hub, AWS Systems Manager and AWS X-Ray | Supports a diverse ecosystem of partner integrations
8 | Single-pane dashboard for visibility into all OUs, accounts and guardrails | Needs integration with Security Hub to view all policies implemented across regions and accounts | Monitor your security posture and quickly identify security issues and trends across AWS accounts in the summary dashboard


Relevance Lab Compliance as a Code Framework
Relevance Lab’s Compliance as a Code framework is an integrated model combining AWS Control Tower (CT), Cloud Custodian and AWS Security Hub. As shown below, CT provides organizations with pre-defined multi-account governance based on AWS best practices, so account provisioning is standardized across the hundreds or thousands of accounts in the organization. By enabling Config rules, you can bring in additional compliance checks to manage security, cost and account management. To implement event- and action-based policies, Cloud Custodian is deployed as a complementary solution to CT, monitoring, notifying and taking remediation actions based on events. As these policies run in AWS Lambda, Cloud Custodian enforces Compliance as Code and auto-remediation, enabling organizations to accelerate towards security and compliance. Real-time visibility into who made what changes from where makes it possible to detect human errors and non-compliance and take suitable remediation, improving operational efficiency and bringing in cost optimization.

For example, Custodian can identify all untagged EC2 instances, or EBS volumes not attached to an EC2 instance, and notify the account admin that they will be terminated in the next 48 to 72 hours if no action is taken. A custom insights dashboard on Security Hub helps admins monitor non-compliance and integrate with an ITSM to create tickets and assign them to resolver groups. RL has implemented Compliance as a Code for its own SaaS production platform, RLCatalyst Research Gateway, a custom cloud portal for researchers.
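
The unattached-EBS example could be expressed as a Custodian policy roughly like the following; the policy name, email address, queue URL and grace period are placeholders for illustration:

```yaml
policies:
  - name: ebs-unattached-cleanup      # example policy name
    resource: ebs
    filters:
      - Attachments: []               # volumes not attached to any EC2 instance
    actions:
      - type: mark-for-op             # schedule deletion after a grace period
        op: delete
        days: 3
      - type: notify                  # warn the account admin before deletion
        to: ["admin@example.com"]
        transport:
          type: sqs
          queue: https://sqs.us-east-1.amazonaws.com/111122223333/custodian-notify
```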



Common Use Cases


How to get started
Relevance Lab is an AWS consulting partner and helps organizations achieve Compliance as a Code using AWS best practices. While enterprises can try to build some of these solutions themselves, it is a time-consuming, error-prone activity that needs a specialist partner. RL has helped 10+ enterprises with this need and has a reusable framework to meet security and compliance requirements. To start, customers can enroll in the 10-10 program, which gives insight into their current cloud compliance. Based on an assessment, Relevance Lab shares a gap-analysis report and helps design the appropriate “to-be” model. Our cloud governance professional services group also provides implementation and support services with agility and cost effectiveness.

For more details, please feel free to reach out to marketing@relevancelab.com




2021 Blog, AWS Platform, Blog, Featured

The year 2020 undoubtedly brought unprecedented challenges with COVID-19 that required countries, governments, businesses and people to respond proactively with new-normal approaches. Certain businesses managed to respond to the dynamic macro environment with far more agility, flexibility and scale. Cloud computing had a dominant effect in helping businesses stay connected and deliver critical solutions. Relevance Lab scaled up its partnership with AWS to align with the new focus areas, resulting in a significant increase in coverage of products, specialized solutions and customer impact in the past 12 months compared to what had been achieved before.

AWS launched a number of new global initiatives in 2020 in response to the business challenges resulting from the COVID-19 pandemic. The following picture describes those initiatives in a nutshell.


Relevance Lab’s Focus areas have been very well aligned
Given the macro environment challenges, Relevance Lab quickly aligned their focus on helping customers deal with the emerging challenges and those areas were very complementary to AWS’ global initiatives and responses listed above.

Relevance Lab aligned its AWS solutions & services across four dominant themes, leveraging RL’s own IP-based platforms and deep cloud competencies. The following picture highlights those key initiatives and the native AWS products and services leveraged in the process.


Relevance Lab also made major strides along the AWS partnership maturity levels for specialized consulting and key product offerings. The following picture highlights the major achievements in 2020 in a nutshell.



AWS Specializations & Spotlights
Relevance Lab is a specialist cloud managed services company with deep expertise in DevOps, Service Automation and ITSM integrations. It also has the RLCatalyst platform, built to support automation across AWS Cloud and ITSM platforms. The RLCatalyst family of solutions helps with self-service cloud portals, IT service monitoring, and automation through BOTs. While maintaining a multi-sector client base, we are also uniquely focused, with existing solutions for Higher Education, Public Sector and Research sector clients.


Spotlight-1: AWS Cloud Portals Solution
Relevance Lab has developed a family of cloud portal solutions on top of our RLCatalyst platform. Cloud portal solutions aim to simplify AWS consumption using self-service models, with emphasis on 1-click provisioning/lifecycle management, dashboard views of budget/cost consumption, and modeling personas, roles & responsibilities within a sector context.


A unique feature of the above solutions is that they promote a hybrid model of consumption wherein users can bring their own AWS accounts (consumption accounts) under the framework of our cloud portal solution and benefit from being able to consume AWS for their educational and research needs in an easy self-service model.

The solutions can be consumed as either an Enterprise or as a SaaS license. In addition, the solutions will be made available on AWS Marketplace soon.


Spotlight-2: AWS Security Governance at Scale Framework
The framework and deployment architecture uses AWS Control Tower as the foundational service and other closely aligned and native AWS products and services such as AWS Service Catalog, AWS Security Hub, AWS Budgets, etc. and addresses subject areas such as multi-account management, cost management, security, compliance & governance.

Relevance Lab can assess, design and deploy or migrate to a fully secure AWS environment that lends itself to governance at scale. To encourage clients to adopt this journey, we have launched a 10-10 Program for AWS Security Governance that provides clients with an upfront blueprint of the entire migration or deployment process and end-state architecture so that they can make an informed decision.


Spotlight-3: Automated User Onboarding/Offboarding for Enterprises Use Case
Relevance Lab is a unique partner that possesses deep expertise on AWS and ITSM platforms such as ServiceNow, freshservice, JIRA Service Desk, etc. The intersection of these platforms lends itself to relevant use cases for the industry. Relevance Lab has come up with a solution for automated User onboarding & offboarding in an enterprise context. This solution brings together multiple systems in a workflow model to accomplish user onboarding and offboarding tasks for an enterprise. It includes integration across HR systems, AWS Service Catalog and other services, ITSM platforms such as ServiceNow and assisted by Relevance Lab’s RLCatalyst BOTs engine to perform an end-to-end user onboarding/offboarding orchestration in an unassisted manner.


Key Customer Use Cases and Success Stories in 2020
Relevance Lab helped customers across verticals with growing momentum on Cloud adoption in the post COVID-19 situation to rewrite their digital solutions in adopting the new touchless interactions, remote distributed workforces and strong security governance solutions for enabling frictionless business.


AWS Best Practices – Blogs, Campaigns, Technical Write-ups
The following is a collection of knowledge articles and best practices published throughout the year related to our AWS centered solutions & services.

AWS Service Management


AWS Practice Solution Campaigns

AWS Governance at Scale

AWS Workspaces

RLCatalyst Product Details

AWS Infrastructure Automation

AWS E-Learning & Research Workbench Solutions

Cloud Networking Best Practices


Summary
The momentum of cloud adoption in 2020 is quite likely to continue and grow in 2021. Relevance Lab is a trusted partner in your cloud adoption journey, driven by a focus on the following key specializations:

  • Cloud First Approach for Workload Planning
  • Cloud Governance 360 for using AWS Cloud the Right-Way
  • Automation Led Service Delivery Management with Self Service ITSM and Cloud Portals
  • Driving Frictionless Business and AI-Driven outcomes for Digital Transformation and App Modernization

To learn more about our services and solutions listed above or engage us in consultative discussions for your AWS and other IT service needs, feel free to contact us at marketing@relevancelab.com




2021 Blog, Blog, Command blog, Featured

AWS X-Ray is an application performance service that collects data about requests that your application processes, and provides tools to view, filter, and gain insights into that data to identify issues and opportunities for optimization. It enables a developer to create a service map that displays an application’s architecture. For any traced request to your application, you can see detailed information not only about the request and response, but also about calls that your application makes to downstream AWS resources, microservices, databases and HTTP web APIs. It is compatible with microservices and serverless based applications.

The X-Ray SDK provides

  • Interceptors to add to your code to trace incoming HTTP requests
  • Client handlers to instrument AWS SDK clients that your application uses to call other AWS services
  • An HTTP client to use to instrument calls to other internal and external HTTP web services

The SDK also supports instrumenting calls to SQL databases, automatic AWS SDK client instrumentation, and other features.

Instead of sending trace data directly to X-Ray, the SDK sends JSON segment documents to a daemon process listening for UDP traffic. The X-Ray daemon buffers segments in a queue and uploads them to X-Ray in batches. The daemon is available for Linux, Windows, and macOS, and is included on AWS Elastic Beanstalk and AWS Lambda platforms.
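
The daemon hand-off described above can be sketched in a few lines of Python. This is a simplified imitation of what the SDK does internally, not the SDK itself: the segment fields are minimal and the segment id is a hypothetical example.

```python
import json
import socket
import time

def send_segment(name, trace_id, daemon=("127.0.0.1", 2000)):
    """Sketch of the SDK-to-daemon hand-off: build a JSON segment document
    and emit it to the X-Ray daemon over UDP (fire-and-forget)."""
    now = time.time()
    segment = {
        "name": name,                     # the service that handled the request
        "id": "70de5b6f19ff9a0a",         # 16-hex-digit segment id (hypothetical)
        "trace_id": trace_id,
        "start_time": now,
        "end_time": now,
    }
    # The daemon expects a small JSON header, a newline, then the segment.
    header = {"format": "json", "version": 1}
    payload = json.dumps(header) + "\n" + json.dumps(segment)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload.encode("utf-8"), daemon)  # UDP: succeeds even with no listener
    sock.close()
    return payload
```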

X-Ray uses trace data from the AWS resources that power your cloud applications to generate a detailed service graph. The service graph shows the client, your front-end service, and corresponding backend services to process requests and persist data. Use the service graph to identify bottlenecks, latency spikes, and other issues to solve to improve the performance of your applications.

AWS X-Ray Analytics helps you quickly and easily understand

  • Any latency degradation or increase in error or fault rates
  • The latency experienced by customers in the 50th, 90th, and 95th percentiles
  • The root cause of the issue at hand
  • End users who are impacted, and by how much
  • Comparisons of trends based on different criteria. For example, you could understand if new deployments caused a regression

How AWS X-Ray Works
AWS X-Ray receives data from services as segments. X-Ray groups segments that belong to a common request into traces, then processes the traces to generate a service map that provides a visual depiction of the application.
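
Conceptually, the grouping step can be sketched in a few lines; the segment records here are simplified to just a trace id and a service name for illustration:

```python
from collections import defaultdict

def group_into_traces(segments):
    """Conceptual sketch of X-Ray's grouping: all segments sharing a
    trace_id (propagated along with the original request) form one trace."""
    traces = defaultdict(list)
    for seg in segments:
        traces[seg["trace_id"]].append(seg["name"])
    return dict(traces)
```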

AWS X-Ray features

  • Simple setup
  • End-to-end tracing
  • AWS Service and Database Integrations
  • Support for Multiple Languages
  • Request Sampling
  • Service map

Benefits of Using AWS X-Ray

Review Request Behaviour
AWS X-Ray traces customers’ requests and accumulates the information generated by the individual resources and services that make up your application, giving you an end-to-end view of your application’s actions and performance.

Discover Application Issues
With AWS X-Ray, you can extract insights about your application’s performance and identify root causes. Because X-Ray provides tracing, you can follow request paths to diagnose where in your application performance issues arise and what is causing them.

Improve Application Performance
AWS X-Ray’s service maps let you see the connections between resources and services in your application in real time. You can easily spot where high latencies are occurring, visualize node and edge latency distributions for services, and then drill down into the specific services and paths impacting application performance.

Ready to use with AWS
AWS X-Ray works with Amazon EC2 Container Service, Amazon EC2, AWS Elastic Beanstalk, and AWS Lambda. You can use AWS X-Ray with applications written in Node.js, Java, and .NET that are deployed on these services.

Designed for a Variety of Applications
AWS X-Ray works for both simple and complex applications, whether in development or in production. With X-Ray, you can trace requests made to applications that span multiple AWS Regions, AWS accounts, and Availability Zones.

Why AWS X-Ray?
Developers spend a lot of time searching through application logs, service logs, metrics, and traces to understand performance bottlenecks and to pinpoint their root causes. Correlating this information to identify its impact on end users comes with its own challenges of mining the data and performing analysis. This adds to the triaging time when using a distributed microservices architecture, where the call passes through several microservices. To address these challenges, AWS launched AWS X-Ray Analytics.

X-Ray helps you analyze and debug distributed applications, such as those built using a microservices architecture. Using X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root causes of performance issues and errors. It helps you debug and triage distributed applications wherever those applications are running, whether the architecture is serverless, containers, Amazon EC2, on-premises, or a mixture of all of these.

Relevance Lab is a specialist AWS partner and can help organizations implement a monitoring and observability framework, including AWS X-Ray, to ease application management and help identify bugs in complex distributed workflows.

For a demo of the same, please click here

For more details, please feel free to reach out to marketing@relevancelab.com




2020 Blog, AWS Governance, Blog, Featured

With the growing need for cloud adoption across enterprises, there is a need to move end-user computing workloads and traditional data center capacity to the cloud. Relevance Lab is working with AWS partner groups to simplify the cloud adoption process and bring in best practices for the entire lifecycle of Plan-Build-Run on the cloud. Following is the suggested blueprint for cloud adoption and moving new workloads onto the cloud.


  • CloudEndure to enable automated Cloud Migration
  • AWS Control Tower is used to set up and govern a new, secure multi-account AWS environment
  • AWS Security, Identity and Compliance
  • AWS Service Management Connector for ServiceNow with Service Catalog management
  • AWS Systems Manager for Operational Insights
  • RLCatalyst Intelligent Automation

As part of our own organization’s experience adopting AWS for both workspace and server needs, we followed the process below to cater to the needs of multiple organizational roles.

Since we already had an AWS master account but did not initially use AWS Control Tower, the steps followed were as follows.


  • Setup & launch AWS Control Tower in our Master Account and build multiple Custom OUs (Organizational Units) & corresponding accounts using account factory
  • Use CloudEndure to migrate existing workloads to the new organizations under Control Tower
  • For two different organizational units, there is a need to publish separate service catalogs and access to the catalogs controlled by User Roles defined in AD integrated with ServiceNow. Based on the setup only approved users can order items relevant to their needs
  • Used AWS Service Management Connector to publish the catalogs and integrate with AWS resources
  • Implementation of RLCatalyst BOTs Automation for 1-Click provisioning
  • Different guardrails for workload being provisioned for AWS Workspaces and AWS Server Assets based on organization needs
  • Management of AWS server assets by AWS Systems Manager
  • Mature ITSM processes based on ServiceNow
  • Proactive monitoring of workspaces and servers for any incidents using RLCatalyst Command Centre

Based on our internal experience adopting the full lifecycle of Plan-Build-Run use cases, it is evident that multiple AWS solutions, integrated with ServiceNow and automated with the RLCatalyst product, provide a reusable blueprint for intelligent and automated cloud adoption. Answering the following quick questions can jumpstart your cloud adoption.


  • List down your desktop assets and server assets to be migrated to the cloud with an underlying OS, third party software and applications
  • Designing your AWS Landing zone with security considerations between public- facing and private facing assets
  • Designing your networking elements between your organization’s business unit segmentation of assets and different environments needed for development, testing and production
  • List down your cloud cost segmentation and governance needs based on which a multi-organization setup can be designed upfront, and granular asset tags may be implemented
  • Capacity planning and use of Reserved Instances for Cost optimization
  • User Management and Identity management needs with possible integration to existing Microsoft AD infrastructure (On-Cloud or On-Prem) and Single Sign-On
  • Capture the needs from the IT department to provide the organization with Self-Service Portals to be able to Order Assets and Services in a frictionless manner with automated fulfilment using BOTs
  • The use of Systems Manager, Runbook design & automation and Command Center are used to proactively monitor any critical assets and applications to manage incidents efficiently
  • Ability to provision and deprovision assets on-demand with automated templates
  • Automation of User Onboarding and Off-boarding
  • ITSM Service management with Change, Configuration management database, Asset Tracking and SecOps
  • Disaster Recovery strategy and internal assessments for readiness
  • Cloud Security, Vulnerability testing, Ongoing patch management lifecycle and GRC
  • DevOps adoption for higher velocity of achieving Continuous Integration and Continuous Deliveries

For most organizations, moving to the cloud is a competency discovery process that lacks best practices and a maturity model. A better approach is to use a solid framework of technology, people and processes to make your cloud adoption frictionless. Relevance Lab, with its pre-built solutions in partnership with AWS and ServiceNow, can help enterprises adopt cloud faster.

For more details, please feel free to reach out to marketing@relevancelab.com



2020 Blog, Blog, Digital Blog, Featured
Working with a large enterprise customer supporting B2B and B2C business, we leveraged Shopify to launch fully functional e-commerce stores, enabling new digital channels in a very short window. With the COVID-19 pandemic disrupting existing business and customer reach, large companies had to quickly realign their digital channels and supply chains to deal with disruption and changes in consumer behaviour. Businesses needed a frictionless approach to enable new digital channels, markets and products, reaching out in a touchless manner while rewiring their backend fulfilment systems to deal with the supply chain disruptions. Relevance Lab worked closely with our customers during these challenging times to bring in the necessary changes, empowering e-commerce, enterprise integrations and supply chain insights to help create and maintain business continuity in the new environment.

The customer had invested in a full-fledged but heavy e-commerce platform that was slow and costly to change. With Shopify, we quickly enabled them to set up a fully functional e-commerce store in Canada, with standard integrations, region-specific context, and a positive revenue impact.

It all boiled down to identifying an e-commerce platform which

  • Is easy and fast to set up
  • Is secure and scalable
  • Incurs the least total cost of ownership
  • Provides the convenience to shop on multiple devices
  • Is customizable as per requirements

We configured the Shopify built-in theme to meet branding requirements and purchase workflows. Payment was enabled through multiple channels, including credit card, PayPal, and GPay. The store was also multilingual, supporting two languages – English and French. We were able to go live in just four weeks and provide complete functionality covering over 500 products, delivered on a very cost-optimized Shopify monthly subscription plan.

In parallel to building the storefront, the operations team simultaneously enabled

  • Adding new products to the online store
  • Configuring tax/discounts
  • Configuring customer support
  • Validating standard reports such as sales reports etc

The merchant had complicated tax calculations (GST, PST, QST) across 13 regions, which were simplified by Shopify’s out-of-the-box country-specific tax configuration.
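
As a sketch of the kind of per-region calculation involved, consider the snippet below. The rates and region codes are illustrative only, not Shopify's actual tax tables; real rates vary by province and change over time.

```python
# Hypothetical rate table for illustration: each region maps to the
# taxes that apply there and their rates.
TAX_RULES = {
    "ON": [("HST", 0.13)],                    # harmonized sales tax
    "BC": [("GST", 0.05), ("PST", 0.07)],     # federal + provincial taxes
    "QC": [("GST", 0.05), ("QST", 0.09975)],  # federal + Quebec sales tax
}

def taxes_for(region, subtotal):
    """Compute each applicable tax for an order subtotal in a region."""
    return {name: round(subtotal * rate, 2) for name, rate in TAX_RULES[region]}
```

A tax engine like Shopify's maintains such rules per jurisdiction so the merchant does not have to encode them by hand.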

Feature Configuration and Customization Details

  • Customization of Shopify theme to make the store stand out and look great on web and mobile
  • Extended store functionalities such as translation, user review, product quick view and product pre-order using apps from Shopify Marketplace
  • Shopify’s own payment provider to accept credit card payments
  • Blog publishing through Shopify native blog features to help customers make informed decisions
  • Enabled multiple languages from Shopify admin and created separate URLs for translated content
  • Shopify Fulfilment Network offered a dedicated network of fulfilment centers that ensure timely deliveries, lower shipping costs, and a positive customer experience
  • Shipping suite provides tools to calculate real-time shipping rates, purchase and print shipping labels, and track shipments
  • Using Shopify built in tax engine to automatically handle most common sales tax calculations
  • Shopify’s native Notifications module to automatically send email or SMS to customers confirming their order and providing shipping updates
  • With minimal effort we have configured Shopify Email to create email marketing campaigns and send them from Shopify
  • Over 500 products were imported in a matter of minutes using the product import feature. More advanced features, including associating multiple product images and metadata with a product, were available out of the box
  • Advanced store navigation was configured using collections and tags which helped customers to easily discover products of their choice
  • Shopify’s analytics and reports provide means to review store’s recent activity, get insight into visitors, analyze online store speed, and analyze store’s transactions

Solution Architecture
Key components of Shopify platform are

  • Partner Dashboard: Provides capabilities such as managing API credentials, tracking metrics for your published apps, creating development stores, and accessing resources that help you build your business
  • Shopify App CLI: Bootstrap a working Shopify app with Shopify command-line tool
  • Shopify App Generator for Rails: A Rails engine for building Shopify apps
  • App Bridge: JavaScript library to embed your app seamlessly in the Shopify admin
  • Shopify Admin API Library for Ruby: A Ruby library that simplifies making Admin API calls in Ruby apps
  • Shopify Admin API Library for Python: A Python library to simplify making Admin API calls in Python apps
  • Shopify Admin API GraphiQL explorer: Interactive tool to build GraphQL queries using real Shopify API resources
  • Shopify Storefront API GraphiQL explorer: Interactive tool to build GraphQL queries for Shopify’s Storefront API
  • JavaScript Buy SDK: Add Shopify features to any website
  • Android Buy SDK: Add Shopify features to Android apps
  • iOS Buy SDK: Add Shopify features to iOS apps
  • Polaris: Create great user experiences for your apps with Shopify’s design system and component library
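As a small illustration of the GraphiQL explorers listed above, the kind of query one would prototype there can also be composed in code. The sketch below builds a Storefront API products query in Python; the shop domain and API version in the comment are placeholders, not real values.

```python
# Hedged sketch: composing a Storefront API GraphQL query of the sort
# prototyped in Shopify's GraphiQL explorers.

def products_query(first):
    """Build a GraphQL query for the first N products with title and handle."""
    return """
    query {
      products(first: %d) {
        edges {
          node {
            title
            handle
          }
        }
      }
    }""" % first

# In a real app, this string would be POSTed as JSON to the shop's Storefront
# endpoint, e.g. https://<shop>.myshopify.com/api/<version>/graphql.json,
# with an X-Shopify-Storefront-Access-Token header identifying the client.
print(products_query(5))
```

The same query pasted into the Storefront API GraphiQL explorer returns live results against real store data, which makes the explorer a convenient place to iterate before embedding the query in application code.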

Leveraging the above standard Shopify components, the solution was delivered with the following storefront architecture.



Relevance Lab Differentiator
Relevance Lab delivers Digital Solutions covering e-commerce, Content, CRM, and E-Business. Within e-commerce platforms, it has deep specializations in Salesforce Commerce Cloud, Adobe Experience Manager, and Shopify.

With complementary expertise in Cloud Infrastructure, Business Analytics, and ERP Integration, we help our customers achieve the flexibility, scalability, and cost optimization needed to adopt cloud platforms covering SaaS, PaaS, and IaaS. Based on the context of the business challenge, we provide an end-to-end perspective, identifying areas of friction and leveraging technology to address them. In this case, the customer recovered quickly from COVID-19-induced disruptions, with a solution delivered at a fraction of regular costs and a fast ROI. A collaborative approach to deeply understanding customer business problems, the ability to consult on multiple solutions, and deep expertise to enable the outcome are part of Relevance Lab's unique capabilities.


For more details on how we help achieve frictionless digital business and leverage cloud-based platforms like Shopify for e-commerce, feel free to contact marketing@relevancelab.com



