Careers
Life @Relevance Lab
Experience Transformation at Relevance Lab - Enjoy work like never before!! Relevance Lab offers a fast-paced career with the opportunity to be at the forefront of leading-edge technologies in Cloud, DevOps, User Experience Design, Big Data, etc. If "Passion to Achieve" is what drives you, then Relevance Lab is the place to be.
Join Us
Supply Chain SME/Project Manager
Work Location: Hybrid (Bangalore)
Experience: 8 - 10 years
Required Skills: SAP Implementation, Supply Chain Management, SME, Project Manager, ERP Implementation, SAP MM, SAP PP, MRP, Capacity Planning
Primary Requirement:
Responsibilities:
- Lead execution of the assigned projects & responsible for end-to-end execution.
- Lead, guide and support the design and implementation of targeted strategies including identification of change impacts to people, process, policy, and structure, stakeholder identification and alignment, appropriate communication and feedback loops, success measures, training, organizational readiness, and long-term sustainability.
- Manage the day-to-day activities, including scope, financials (e.g., business case, budget), resourcing (e.g., Full-time employees, roles and responsibilities, utilization), timelines and toll gates and risks.
- Implement project review and quality assurance to ensure successful execution of goals and stakeholder satisfaction.
- Consistently report and review progress to the Program Lead, Steering group, and relevant stakeholders.
- May be involved in more than one project or work across a portfolio of projects.
- Identify improvement and efficiency opportunities across the projects.
- Analyse data, evaluate results, and develop recommendations and road maps across multiple workstreams.
- Partner with the Program Lead in setting up & implementing the projects for Digital Finance Initiative.
- Build and maintain effective partnerships with key cross functional leaders and project team members across functions such as Finance & Technology
Qualifications:
- Knowledge of functional supply chain and planning processes, including ERP/MRP, capacity planning, and managing planning activities with contract manufacturers
- Experience in implementing ERP systems such as SAP and Oracle
- Experience in systems integration and ETL tools such as Informatica and Talend is a plus.
- Experience with data mapping and systems integration is a plus.
- Functional knowledge of supply chain or after-sales service operations is a plus.
- Outstanding drive, excellent interpersonal skills and the ability to communicate effectively, both verbally and in writing, and to immediately contribute in a team environment.
- An ability to prioritize and perform well in a fast-paced environment, while maintaining a high level of client focus.
- Demonstrable track record of delivery and impact in managing/delivering transformation, with minimum 6-9 years’ experience in project management & business transformation.
- Experience in managing technology projects (data analysis, visualization, app development, etc.) along with at least one function such as the Procurement domain, process improvement, continuous improvement, change management, or operating model design.
- Has performed the role of a scrum master or managed a project having scrum teams.
- Has managed projects with stakeholders in multi-location landscape.
- Experience in managing analytics projects will be a huge plus.
- Understanding and application of Agile and Waterfall methodologies
- Exposure to tools and applications such as Microsoft Project, Jira, Confluence, PowerBI, Alteryx
- Understanding of Lean Six Sigma
- Preferably a postgraduate (MBA), though not mandatory
- Excellent interpersonal (communication and presentation) and organizational skills; problem-solving abilities and a can-do attitude
- Confident, proactive self-starter, comfortable managing and engaging others
- Effective in engaging, partnering with, and influencing stakeholders across the matrix up to VP level.
- Ability to move fluidly between big picture and detail always keeping the end goal in mind.
- Inclination toward collaborative partnership, and able to help establish/be part of high performing teams for impact.
- Highly diligent with close eye for detail. Delivers quality output.
Oracle Fusion Integration Developer
Work Location: Work from Home
Experience: 8 - 10 years
Required Skills: Oracle Fusion Integration, OIC, OTBI, Oracle SOA Suite, Web Services, API, XML, XSLT, MuleSoft
Responsibilities:
- The Oracle Fusion Financials Integration Developer is a professional responsible for designing, developing, and implementing integrations between Oracle Fusion Financials and other systems, e.g., Azure Synapse Pool
- This includes working with business stakeholders to understand the requirements, designing and developing the integrations, and testing and deploying them.
- Design and develop integrations between Oracle Fusion Financials and other systems using Oracle Integration Cloud (OIC) or similar middleware platforms.
- Collaborate with Product Owners and end-users to gather integration requirements and understand financial processes that need to be integrated.
- Create technical specifications and design documentation for integration solutions.
- Configure and customize standard integration connectors and develop custom adapters as needed.
- Ensure data integrity and accuracy by implementing data validation and reconciliation procedures.
- Perform unit testing, system testing, and assist in user acceptance testing (UAT) to validate integration functionality.
- Troubleshoot and resolve integration issues and performance bottlenecks.
- Monitor integration processes and address any errors or exceptions promptly.
- Stay up to date with Oracle Fusion Financials updates and new features that may impact integrations.
- Collaborate with cross-functional teams, including functional analysts, developers, and database administrators, to support end-to-end integration processes.
- Provide technical support and training to end-users on integration-related topics.
- Document integration processes, configurations, and troubleshooting guides for future reference.
- Contribute to continuous improvement efforts to enhance integration efficiency and effectiveness.
- Other duties as assigned.
Qualifications:
- This role requires a combination of technical expertise, financial domain knowledge, and integration experience to ensure seamless data flow and process automation.
- Bachelor's degree in computer science, Information Technology, Finance, or a related field
- 5+ years of experience as an Oracle Fusion Financials Integration Developer or similar role
- Strong understanding of Oracle Fusion Financials modules, e.g., AP, Expense, GL, FA, Purchasing, etc.
- Hands-on experience with Oracle Integration Cloud (OIC), OTBI or Oracle SOA Suite for integration development
- Strong knowledge of web services, APIs, and middleware technologies
- Proficiency in XML, XSLT, SFTP, SOAP, REST, and other integration technologies such as MuleSoft
- Experience with Oracle Cloud Implementation and Oracle Cloud Support
- Expert level experience in writing complex SQL queries, SQL tuning, and database concepts.
- Ability to analyze complex integration requirements and propose effective solutions.
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration skills.
Senior Power BI Developer
Work Location: Work from Home/Bangalore
Experience: 5 - 8 years
Required Skills: Power BI, DAX, T-SQL, SSRS, Data Analyst, SQL
Responsibilities:
- Using Power BI, create advanced business dashboards and interactive visual reports.
- Define key performance indicators (KPIs) with specific objectives and track them regularly.
- Analyze data and display it in reports to aid decision-making.
- Convert business needs into technical specifications and establish a timetable for job completion.
- Create, test, and deploy Power BI scripts, as well as execute efficient deep analysis.
- Use Power BI to run DAX queries and functions.
- Ability to work on large data sets and implement incremental refreshes.
- Power BI Gateway setup and tuning.
- Create charts and data documentation with explanations of algorithms, parameters, models, and relationships.
- Construct a data warehouse.
- Use SQL queries, Oracle procedures, and materialized views to get the best results.
- Make technological adjustments to current BI systems to improve their performance.
- For a better understanding of the data, use filters and visualizations.
- Analyze current ETL procedures to define and create new systems.
Power BI Requirements & Skills:
- Strong Power BI Development & Architecture Expertise.
- Prior experience in data-related tasks
- Mastery in data analytics.
- Should be proficient in software development.
- Should have worked with multiple data sources and large data sets that are updated in real time.
- Familiar with Oracle Server: queries, procedures, and materialized views.
- Familiar with MS SQL Server BI stack tools and technologies, such as SSRS, T-SQL, Power Query, MDX, Power BI, and DAX.
- Analytical thinking for converting data into relevant reports and graphics.
- Capable of enabling row-level data security.
- Knowledge of Power BI application security layer models.
- Ability to run DAX queries on Power BI desktop.
- Proficient in doing advanced-level computations on the data sets.
Ansible & Terraform Expert
Work Location: Remote
Experience: 5 to 10 Years
Required Skills: Ansible, Terraform, AWS, DevOps, Shell Scripting
Work Timings: Flexible to work US shifts from 1:00 PM to 10:00 PM, 2:00 PM to 11:00 PM, or 3:00 PM to 12:00 AM IST.
Technical Experience:
- Must be well versed in Ansible and Terraform
- Exposure to shell scripting is an added advantage.
- Must be aware of cloud services, especially AWS DevOps
- Independently work on user stories
- Expected to prepare documentation and update it when required.
Middleware Engineer
Work Location: Remote
US Shift: 6:00 PM IST to 3:00 AM IST
Experience: 4 - 8 years
Required Skills: Middleware, API Gateway, IIS, Puppet OR Ansible OR Terraform, JBoss, Tomcat
Job Description:
Technical Skills:
- Experience in supporting Middleware operations, including building, maintaining, and decommissioning services and infrastructure in a cloud environment
- Working knowledge of AWS infrastructure, tooling, and architecture
- Experience with Windows and Linux OS and supporting tooling for the enterprise (Services, IIS, Puppet, JBoss, Tomcat)
- Experience with tooling infrastructure commonly found in enterprise (API Gateway, MFA, SSO)
- Experience with setup (installation & configuration) of systems built on Java and .NET application stacks
Expected Tasks (sampling):
- Security patches
- Fix patches
- JBoss administration and maintenance
- Tomcat maintenance and puppet module enhancements
- IIS maintenance and puppet module enhancements
- Kafka and API Gateway maintenance
- Create packages for new binary upgrades and create SOPs
- HA/Cluster/Replication setup
- Middleware Vulnerability remediation only
- MW Decommission
- MW migration/consolidation in same environment
- MW Service pack/version upgrade/patching
- New MW installation & configuration
- Support audits (internal/external), including collection and review of all evidence and controls in scope
- Support Client, 3rd party & vendor Audit activities
- Work with OEMs for problem resolution of major/chronic issues
Pipeline Support Engineer
Work Location: Remote
Experience: 4 - 8 years
Required Skills: Jenkins, Bitbucket, Puppet OR Ansible OR Terraform, Artifactory, CI/CD EKS Cluster
Job Description:
Technical Skills:
Software Delivery and Cloud Platform Service Support
Functions:
- Jenkins Administration
- Bitbucket Administration
- Puppet Administration
- Artifactory Administration
- Provision Access requests in Jenkins
- New Repo Creation in Bitbucket
- DevOps Escalation Point of Contact (deployment/troubleshooting/support)
- End user tools support (Jenkins, Bitbucket, Puppet, Artifactory, etc.)
- Tools upgrades/Patching (Bitbucket, Jenkins, CICD EKS Cluster)
- AMI Pipeline Support
- Cloud tools licensing and Certificates
- Compliance and Audits for Cloud technologies
- ECS / EKS / Containers Escalation Support
- Tier 3 support for Configuration Management development
Cloud Technical Lead
Work Location: Bangalore
Experience: 12-14 years
Required Skills: ELK, Splunk, Elasticsearch, AppDynamics, Dynatrace, Solarwinds, Nagios, Grafana, Prometheus, BigPanda, ITIL
Job Description:
- 12+ years of experience with implementation, operations, maintenance of IT systems and/or administration of software functions in multi-platform and multi-system environments.
- Min 4-5 years of experience working on Cloud.
- Should have knowledge of monitoring tools/platforms. Experience in the ITIL framework and Service Delivery.
- Hands on experience in design and implementation of a monitoring solution/ framework for a complex setup.
- Experience implementing and delivering monitoring solutions in development, QA, and Production environments.
- Demonstrate competence in shell scripting and high-level programming languages, with a strong focus on Python.
- Previous experience defining, creating, and supporting monitoring dashboards.
- Experience working across departments evangelizing and communicating observability expertise and standards.
- Possess practical knowledge and appreciation of various aspects of distributed service design, including messaging protocols, caching strategies and autonomous software design practices.
- Experience with monitoring and observability tools and methodologies of products such as ELK, Splunk, Elasticsearch, AppDynamics, Dynatrace, SolarWinds, Nagios, Grafana, Prometheus, BigPanda, Datadog, Site24x7, etc.
- Strong understanding of Open Systems Interconnection model (OSI model).
- Solid understanding of performance metrics, KPIs, statistical calculations, machine learning, and correlation.
- Ability to solve problems across the entire stack - operating systems (Linux/Unix/windows), software, application, and network.
- Responsible for design, development, testing and implementation of monitoring applications to meet business process and application requirements
- Good understanding of alert management and logs for IT systems: servers, storage, network, database, etc.
- Should have solid experience using observability data to debug systems, to reduce the frequency and length of production incidents, and to provide a cohesive overall view of systems health.
- Will build and maintain solutions for getting insights on infrastructure and services supporting applications with focus on logs, metrics and application traces that improve Observability.
- Should think about the problem end-to-end: automate data collection from common data sources, store data efficiently in the application performance management and monitoring tool, render this information for the user based on the defined SLOs and SLIs, and finally focus on the actions by defining and delivering activities on the monitoring roadmap.
- Collaborate with operations & engineering teams, application developers, management and infrastructure teams to assess near- and long-term monitoring needs.
- Implement, maintain, and consult on the observability and monitoring framework that supports the needs of multiple internal stakeholders.
- Keep an eye on the emerging observability tools, trends and methodologies, and continuously enhance our existing systems and processes.
- Participate in process development with our Engineering and Development teams.
- Effectively communicate tool capabilities and processes to varying stakeholders.
- Collaborate with other teams on improving our observability systems.
- Assist with driving monitoring and observability standards to improve consumer experience of mission critical applications, services, and business processes with a strong focus on the end-to-end journey.
- Assist in scheduling and hosting regular tool training sessions to better enable tool adoption and best practices.
- Provide input on improving the global operating model for monitoring and observability services.
- Fine-tune the monitoring solutions to route the right incidents to the IT services teams.
- Troubleshoot performance and operational issues that arise with the monitoring platform.
Cloud Enablement Engineer
Work Location: WFH
Experience: 4 - 8 years
Job Description:
Runway Design and New Service Enablement
Functions:
- Define Cloud Standards
- Enables new AWS/Cloud Services (SDLC)
- CI/CD Tools Re-architecture
- Terraform templates
- New CI/CD Pipeline Creation
- New AWS Account Creation and Standards Management
- Application of Security Controls
- Containers Architecture and Engineering
- Governor Enhancements and Support (S3, IAM, Reaper, etc.)
- Cloud security and cost optimization
- Assists with overflow from Pipeline Support
Cloud Infrastructure and Operations Engineer
Work Location: WFH
Experience: 4 - 8 years
Required Skills: Linux, Windows Server security patching (AWS), Disaster Recovery Server security patching, Windows and Linux Image patching (AMI), Server security configuration and management
US Shift Time: 6:00 PM IST to 3:00 AM IST
Responsibilities:
- Cloud operations team supports and configures Windows OS, Linux OS and manages all data protection services
Functions:
- Support of OS and systems infrastructure for all new builds in AWS
- OS release management - certification of new OS versions
- ASG AMI security patching
- Patch management
- Coordination with EOC for SOP creation to resolve Level 1/2 tasks
- Escalation of Incidents and Outages where OS and system issues are involved
- Decommission of systems and environments
- Security and Compliance: MAR, SOC2 and other audits
- Linux Server security patching (AWS)
- Windows Server security patching (AWS)
- Disaster Recovery Server security patching
- Windows and Linux Image patching (AMI)
- Server security configuration and management
- Infrastructure software deployment
- Administration account controls, password reset, and audit compliance
- Infrastructure and systems management configuration
- Linux entitlement report creation
- Historical vulnerability reporting
- Risk registry tracking, reporting, closure
- Vulnerability remediation including OS patches and MBS config compliance for ALL server in AWS
- Capacity and performance analysis and remediation
- Data protection and backup - Work with vendors
- L1 support – EOC
- L2 support - India-based team, take direction from SMEs
Enterprise Operation Center Engineer
Work Location: WFH
Experience: 4 - 8 years
Required Skills: OS Support, AWS platform tasks, Commvault, Splunk, Managed FTP, Jenkins execution
US Shift Time: 6:00 PM IST to 3:00 AM IST
Job Description:
- Experience in providing on-call-like support during regular business hours.
- Ability to work flexible hours to fill out a 24/7 coverage cycle with staff located in multiple time zones across US and India locations.
- Experience matching that of the Middleware Engineer role.
Expected Tasks:
- Server Reboots
- Agent remediation (Puppet, Splunk, etc.)
- DNS adds
- Certificate generation
- Eyes on glass monitoring
- Responding to alerts (CPU, Diskspace, etc.)
- Execute SOPs
- Round the clock support (OS Support, AWS platform tasks, Commvault, Splunk, Managed FTP, Jenkins execution)
Fusion Finance Functional Consultant
Work Location: WFH
Experience: 8 - 12 years
Required Skills: Oracle Cloud, Oracle Cloud Financials, Oracle Fusion Finance, RMCS (must have)
Primary Requirement:
Must Haves:
- Strong Oracle Functional expertise is required.
- Must have implementation experience with a deep understanding of Oracle Cloud.
- Required to have at least 2-3 full life cycle implementations of Oracle Cloud Financials and Procurement modules.
- Strong experience and functional knowledge of Oracle Cloud Financials applications such as Accounts Receivables (AR), Advanced Collections, Credit Management, RMCS (Revenue Management Cloud Service), General Ledger (GL), Sub-ledger Accounting (SLA), Accounts Payables (AP), Cash Management (CM), Fusion Tax, Funds Capture, etc.
- Should have good understanding of Fixed Assets (FA), Order Management and Procure-to-Pay (P2P) functional areas.
- Ability to configure the Oracle Cloud Applications to meet business requirements and document application set-ups.
- Experience in executing implementation strategy, capturing business, systems requirements and analysis, prepare functional specification documents, solution designing, prototyping, testing, training, and implementing practical business solutions.
- Experience in configuring Enterprise Structures, Ledgers, Business units and Reference Data sets.
- Experience in maintaining and publishing Account Hierarchies for creating Financial Reports and Allocation definitions, cross validation rules and revaluation definitions.
- Strong experience with Chart of Accounts and Enterprise Structures design.
- Experience in configuring Sub-Ledger Accounting (SLA) rules for various sub ledgers in Oracle cloud Financials.
- Configuring Fusion Funds Capture to process credit card transactions.
- Ability to create reports using Financial Reporting Studio (FRS), SmartView, and Oracle BI/Transactional Business Intelligence (OTBI).
- Ability to configure Lockbox payments and create AR Invoices & Receipts using Oracle Cloud ADFDi/FBDI templates.
- Experience in setting up Approval Rules/Workflows in Approvals Management Extensions (AMX) through BPM.
- Good knowledge about Fusion Tax configurations (setting up Geographies, Tax Regimes, Tax Rates and Rules for both US and Canada Sales Tax and Withholding Taxes using external service providers such as Vertex, Taxware etc.)
- Good Knowledge about Data Conversion/Migrations, Inbound/Outbound interfaces and Reports.
- Ability to work with Technical resources, and other 3rd party systems/integrators to implement the project.
- Solid understanding of Oracle project methodology (OUM) and testing strategies such as Conference Room Pilots (CRP) and Process Playbacks (SIT and UAT etc.)
- Identify functionality gaps and support the development of solutions for them.
- Excellent communication skills; adept in business interaction and understanding business applications.
- Capable of working in a fast paced, dynamic and team-oriented environment.
- Ability to multitask while staying focused on release priorities in a Fusion environment.
- Additional knowledge of Oracle Cloud applications such as Cost Management, Channel Revenue Management and Projects (PPM) processes a plus.
- Prior Accounting and Oracle ERP (R12) background.
Senior Database Administrator
Work Location: Bangalore (Hybrid)
Experience: 10 - 12 years
Required Skills: Oracle DBA, AWS Cloud, AWS RDS, Scripting, Oracle
Primary Requirement:
Must Haves:
- Database Administration experience with on-premise and cloud-based environments.
- Experience working in MySQL/Postgres/AWS RDS MySQL/Aurora/Postgres/Amazon Redshift environments.
- Experience working on OCI – DBCS, ADW environment.
- Administration experience with NoSQL databases – Mongo.
- Should have enough knowledge on AWS services like EC2, S3, IAM roles/policies and AWS cloud governance.
- Should have knowledge on migrating database to AWS RDS using DMS & Schema conversion tool.
- Experience in scripting RDS MySQL backup and restore across environments.
- Having knowledge of working with CLI command line tools like AWS CLI & OCI CLI.
- Proactive monitoring of databases both from a performance and capacity management perspective.
- Experience with backups, restores and recovery models. Proactive housekeeping/archiving.
- Experience in Performance Tuning and Optimization, using native monitoring and troubleshooting tools.
- Should have enough knowledge on OCI services like DBCS, Autonomous services, cloud storage, notifications and alarms.
- Experience in monitoring Oracle databases via SQL developer, OMC tool, OCI metrics & performance hub.
- Experience in implementing Oracle alerts via OCI alarm & metrics.
- Experience in monitoring ASM usage and archive log usage via scripting.
- Should have enough knowledge in troubleshooting Oracle tablespace issue, index issue & database mount issues.
- Familiar with Oracle CDBs and PDBs, with experience in moving PDBs across environments.
- Should have knowledge on administrating and working with ADW database.
- Experience in working with wallets and ability to create password-less wallets.
- Familiarity with Database Monitoring and alerting tools – SolarWinds DPA, CloudWatch, Opsgenie.
- Administration experience on Solarwinds DPA monitoring tools like application upgrade, rebuild prod app data on non-prod environment, agent monitoring & configuring alerts.
- Experience in troubleshooting and resolving database problems using various troubleshooting tools.
- Ability to detect and troubleshoot and resolve database related CPU, memory, I/O, disk space and other resource contention issues.
- Experience in implementing operational automation using Shell/Perl/Python scripts.
- Perform database maintenance activities such as backup/recovery, replication, rebuilding and reorganizing indexes.
- Experience in troubleshooting and resolving database integrity issues, performance issues, blocking/deadlocking issues, connectivity issues, data replication issues etc.
- Experience in raising support ticket with various vendors like AWS, Oracle, Solarwinds and work with them and get issue to the closure.
- Good interpersonal, communication, and documentation skills.
Workday Developer
Work Location: WFH
Experience: 6 - 10 years
Required Skills: WorkDay Studio, WorkDay Integration, API, XML, XSLT, Core Connectors, PICOF, PECI
Primary Requirement:
- Minimum 7-10 years of experience working with HRMS systems, e.g., Oracle, SAP, PeopleSoft, etc.
- Minimum 3-4 years’ experience with Workday
- Must have Workday Technical or Techno-Functional experience
- Needs knowledge in Reports, Calculated Fields, Performance tuning and Security
- Experience in end to end implementation for at least one project is mandatory
- Should have worked in Workday Studio; certification is preferred
- EIB, Core Connector, PICOF, and PECI integration knowledge is very important, as all new requests are handled through these types of integrations
- XML and XSLT knowledge is a must
- General knowledge and design of at least 2-3 Workday functional areas
- Good understanding of Workday security
- For the Kronos interface (Workforce Central), technical knowledge of how to build and maintain integrations in Kronos is expected
- Sound experience in SOAP/REST Oracle Cloud APIs to import and export master/transactional data
- Workday integration related certification is preferred
- Work independently or with Integration Lead/Tech Lead and/or Project Manager to participate in integrations delivery
- Gather requirements and prototype the integrations
- Continuously maintain and support current integrations by resolving defects (if any) and enhancing and applying required updates as necessary
- Perform design, development, testing and deployment of integrations
Cloud Engineer
Work Location: Hybrid (Bangalore).
Experience: 1 - 2 years
No. of Positions: 2
Required Skills: AWS/Azure, Python/Powershell, ITSM, ITIL
Technical Experience:
- Experience with managed code or scripting: Python, JSON, YAML, PowerShell
- AWS/Azure certification
- 1-2 Years experience in working on Cloud technologies
- Bachelor’s degree in computer science, information technology or Bachelor’s in Engineering
- Experience in working with Cloud DevOps platforms
- Knowledge on any of the ITSM Platform (Ticketing system)
- Knowledge on ITIL Service Framework
- Good analytical and problem-solving skills
- Responsible for the evaluation of cloud strategy and program architecture
- Responsible for gathering system requirements working together with application architects and owners
- Responsible for generating scripts and templates required for the automatic provisioning of resources
- Experience working with AWS/Azure/GCP
- Experience with CFT, Terraform or Azure Resource Manager.
- Deploying and debugging cloud initiatives as needed in accordance with best practices throughout the development lifecycle
Oracle EBS Technical Developer
Work Location: Remote.
No. of Positions: 2
Experience: 9 - 10 years
Required Skills: Oracle EBS technical, E-business suite, Oracle forms 11g & reports 11g, workflow, BI Publisher/XML publisher, O2C (order to cash), Inventory.
Technical Experience:
- 10+ Years of Oracle EBS Technical and Development Experience.
- Experience working with Oracle Applications version 12.1 onwards.
- 10+ Years of Experience in Designing and developing RICEW (Reports, Interfaces, Conversions, Extensions and Workflow) components.
- Strong Understanding and working knowledge of End-to-End Processes like Order to Cash, Supply chain.
- Strong hands-on development experience in Oracle EBS modules like Order Management, Accounts Receivable, Payments (IBY), Inventory (INV), Contracts (Service & Subscription), Customer Master, and Trade Management.
- Strong experience with SQL, PL/SQL, UNIX shell scripting, Workflow and SQL*Loader.
- Solid experience creating Packages, Procedures, Functions, Views, Exception handling, and SQL*Loader in Oracle PL/SQL and SQL*Plus.
- Experience Building and customizing OAF (Oracle Applications Framework) Pages.
- Experience in Oracle Applications Performance tuning.
- Experience and understanding of the Java and related J2EE technologies development.
- Strong experience with XML, XSD and XSLT.
- Worked on the SLAs and ticketing processes.
- Monitor and manage day-to-day technical tasks and scheduled job failure.
Warehouse Management Technical Developer
Work Location: Remote.
Experience: 7 - 10 years
No. of Positions: 2
Required Skills: Logpro/PKMS, WMi, AS400, CLLE, RPGLE.
Technical Experience:
- Strong hands-on experience in AS400, including CLLE, RPGLE/Free, SQL, FTP, and DB2 access tools.
- Hands on experience on Turnover and Hawkeye tools.
- Experience with LogPro/PKMS - Manhattan products is a plus.
- Experience with WMi (Warehouse Management for iSeries) - Manhattan products is a plus.
- Good knowledge on Warehouse Management systems and Supply chain.
- Strong analytical skills to analyze legacy programs and design business logic for existing programs.
- Working together with business and technical team members in a harmonious way.
- Strong communication and interpersonal skills, with the ability to effectively engage individuals at all levels of the organization.
- Strong conceptual, analytical, problem-solving, troubleshooting and resolution skills.
- Able to work effectively and produce consistent results with minimal supervision.
- Demonstrates a positive attitude, is self-motivated, responsible, conscientious, and detail oriented.
- Ability to effectively mentor less experienced team members on programming and testing techniques.
- Able to support off-hours work as required, including weekends, holidays, and rotational on call.
- Ensures all incidents are resolved against SLAs.
- Interfaces with vendors to troubleshoot and resolve software issues.
- Should get involved in QA testing and UAT activities.
- Day to day activities involve resolving support tickets and working on enhancements.
- Should be familiar working with JIRA tool in an agile environment.
Apply Here
AWS Data Engineer
Education and Experience Required: Bachelor’s degree in computer science, Information Technology, or related field.
Job Location: Remote position
Job Duration: 12+ months contract, Mon - Fri 40 hours per week.
Number of positions: 1 opening
Job Description:
- As a Data Engineer, you will bring architecture blueprints to life and help our team by mapping out solutions to some of their complex technical challenges.
- You'll provide technical expertise, mitigate risk and offer solutions tailored for their needs. From migrations of existing workloads to building advanced cloud solutions, you'll help shape and build to increase agility, improve security, reduce costs and meet utilization targets.
- You will work closely with the Data Engineering, Data Science, Infrastructure Architecture & Platform teams and will focus on creating blueprints to nourish a culture of engineering excellence.
- You will report into the Sr. Managing Director Big Data Governance.
- Responsible for developing, testing, and generating reports or dashboards from a host of systems and delivering results to clients.
- Plan and conduct comprehensive analytics services including reporting and predictive analytics. Provide expert level technical and project leadership.
- May manage and be a key contributor on multiple projects simultaneously.
Responsibilities:
- Support data engineering needs across HBS, including technical support and training in Big Data frameworks and ways of working, revision and integration of source code, and release and source-code quality control.
- Design and produce high-performing, stable end-to-end applications that perform complex processing of massive volumes of batch and streaming data in a multi-tenancy big data platform in the cloud, and output insights back to business systems according to their requirements.
- Design and implement core platform capabilities, tools, processes, ways of working and conventions under agile development to support the integration of data sourcing and use cases implementation, towards reusability, to ease up delivery and ensure standardization across data deliverables in the platform.
- Work with the architecture team to define the strategy for evolving the Big Data Platform.
- Evaluate and help determine the technologies to be used on the Big Data Platform. Investigate new technologies for improving and future-proofing the platform. Collaborate on implementation efforts with various technical teams within HBS, Harvard, and vendors as needed.
- Responsible for other duties as assigned.
- Interview business owners to understand business problems, questions, and pain points; collect, analyze, and present findings to identify trends, causes, risks, and opportunities.
- Conversant with qualitative and quantitative techniques, including predictive analysis and visualization.
- Develop business applications to provide a comprehensive suite of self-service solutions; may test prototype software and participate in approval and release process for software.
- Implement solutions to standardize and systemize routine reports, dashboards, and metrics.
- Advocate for and engage in the implementation of data civics, including building data catalogs, documentation efforts, educating stakeholders, and standardizing access to data.
- Abide by and follow the Harvard University IT technical standards, policies, and Code of Conduct.
Basic Qualifications:
- Minimum of five years’ post-secondary education or relevant work experience.
Additional Qualifications and skills:
- Bachelor’s degree in Mathematics, Physics, Computer Science, Engineering, Statistics, or an equivalent technical discipline, or 5 years of experience, is required.
- 3-5 years of experience in a hands-on technical role as an engineer and/or data engineering working at scale in collaborative coding and cloud environments is required.
- Public projects or demos a plus.
- Experience with common build tools, unit, integration, functional and performance testing from automation perspective, and continuous delivery, under agile practices is necessary.
- Experience with AWS strongly preferred, including data stores, compute engines, container services, and build tools. For example, S3, RDS, Redshift, Glue, EMR (Elastic MapReduce), Kinesis, Lambda, Step Functions, DynamoDB, Data Pipeline, CloudFormation.
- Must be fluent in Python, SQL, and coding best practices. Must be comfortable working programmatically with complex data structures (e.g., highly nested JSON or XML, nested Python lists and dictionaries).
- Must be familiar with Object Oriented design patterns.
- Familiarity with scientific computing packages and platforms such as pandas and Spark preferred.
- Experience with Gen AI prompt engineering and RAG architecture preferred.
- Expert-level experience in designing, building and managing applications to process large amounts of data in a cloud ecosystem or other big data frameworks is a must.
Senior Machine Learning Engineer, Generative AI Applications
Education and Experience Required: Bachelor’s degree in computer science, Information Technology, or related field.
Job Location: Remote
Job Duration: 6+ months contract, Mon - Fri 40 hours per week.
Number of positions: 1
Job Description:
Duties and Responsibilities:
- Architect, build, maintain, and improve new and existing suite of GenAI applications and their underlying systems.
- Automate machine learning pipelines, monitor performance and costs, and optimize models by using techniques such as LoRA/QLoRA.
- Establish reusable frameworks to streamline model building, deployment and monitoring. Incorporate comprehensive monitoring, logging, tracing, and alerting mechanisms.
- Build guardrails, compliance rules, and oversight workflows into the GenAI application platform, such as establishing approval chains for model updates and staged rollouts for production releases.
- Develop templates, guides, and sandbox environments for easy onboarding of new contributors and experimentation with new techniques.
- Ensure development of user-facing applications in the GenAI application platform is easy and safe by enforcing rigorous validation testing before publishing user-generated models and implementing a clear peer-review process for applications.
- Use your entrepreneurial spirit to identify new opportunities to optimize business processes, improve consumer experiences, and prototype solutions to demonstrate value.
- Work closely with data scientists and analysts to create and deploy new product features online and in mobile apps.
- Contribute to and promote good software engineering practices across the team.
- Mentor and educate team members to adopt best practices in writing and maintaining production machine learning code.
- Actively contribute to and re-use community best practices.
- Monitor, debug, track, and resolve production issues.
- Work with project managers to ensure that projects proceed on time and on budget.
- Collaborate with Technical Product Managers to ensure proper tracking of algorithmic performance KPIs and prioritize performance improvements based on effort and impact.
- Complete other responsibilities as assigned.
Required Skills and Qualifications:
- Minimum of seven years’ post-secondary education or relevant work experience
- Bachelor's degree in mathematics, physics, computer science, engineering, statistics, or an equivalent technical discipline.
- Minimum of five years’ software development experience with Python and SQL.
- Minimum of three years’ experience building pipelines to deploy NLP and deep learning models into production in a cloud environment.
- Minimum of three years’ experience using PyTorch, TensorFlow, or MXNet, along with optimizing code for GPU clusters.
- Experience building advanced workflows such as retrieval-augmented generation, model chaining, dynamic prompting, and PEFT/SFT, using LangChain and similar tools.
- Experience establishing model guardrails and developing bias detection and mitigation techniques for AI applications using tools such as NeMo
- Experience with various embedding models and setting up and tuning vector databases to improve performance of semantic search and retrieval systems
- Understanding of the underlying fundamentals, such as Transformers and self-attention mechanisms, that form the theoretical foundation of LLMs.
- Experience working with a variety of relational SQL and NoSQL databases; big data tools such as Hadoop, Spark, and Kafka; a Linux environment; and AWS.
- Knowledge of data pipeline and workflow management tools.
- Expertise in standard software engineering methodology, e.g., unit testing, test automation, continuous integration, code reviews, design documentation.