Hello! I'm Samarth
I am an experienced Cloud Architect and Tech Lead with hands-on experience and a proven track record of success. Currently I am employed at Cyberender LLC as a Cloud Architect. Previously, I worked at Pixeldust Technologies, where I contributed my expertise to one of the largest telecommunications companies.
Previously, I held the role of Principal Software Engineer at Atlan, where I specialized in Full Stack Microservice Cloud Development, DevOps, GitOps, and Enterprise Architecture. Prior to that, I worked as a Full Stack Microservice Cloud DevOps Engineer at HERE Technologies and as a Senior Software Engineer at Oracle. During my tenure at Goldman Sachs and MasterCard, where I was employed through Accenture, I served as a Full Stack Java Developer and Cloud Engineer.
Throughout my career, I have consistently sought opportunities to learn and innovate, resulting in the successful implementation of proofs of concept (POCs) that have positively impacted my work environment. I have played a critical role in architecting microservices and selecting the most suitable technology stack to meet business requirements, leveraging my expertise in web technologies.
Beyond my professional contributions, I am an accomplished educator. As the publisher of two highly rated, best-selling Udemy courses focused on creational design patterns and the Spring Boot framework, I have shared my knowledge with a wide audience. Additionally, I have published several highly rated technical certification courses on platforms such as Coursera, TutorialsPoint, and Simpliv, and have collaborated with esteemed organizations like Great Learning and MicroStream for content creation and delivery. Furthermore, I have provided live training sessions and delivered comprehensive cohort training programs to prominent financial institutions and global product companies.
My dedication to excellence is evident in my ability to design and deliver best-in-industry curriculums and content, providing valuable insights and empowering individuals and organizations to succeed in their respective domains.


Samarth Narula
Cloud Architect | AWS + GCP Architect Professional Certified | Cloud migration Expert | DevOps Expert: Release Process, DR Plan, BC Plan | Project Management | Microservices | I Dare to Experiment with Latest Technologies | Course Publisher on Udemy, Coursera, TutorialsPoint, Edunix | Live Trainer and Content Creator for Great Learning & Microstream
Phone:
(+1) 7272255610
Email:
Address:
Florida, USA
Date of Birth:
August 11th, 1995

EXPERIENCE
6th May 2024 - Present
Position: Cloud Architect
Role: Cloud Architect
Type: Full Time Employment
Cyberender LLC
Building Scalable Cloud Solutions and Architectures.
Collaborating with various teams on architecture design, application development, and deployment.
Leading a team working with Argo products (Argo CD, Argo Workflows, Argo Rollouts); creating and optimizing CI/CD deployment pipelines.
Collaborating with the Security team to ensure cloud security and stopping active cyber attacks.
Creating and monitoring infrastructure to ensure uptime using Terraform, CloudFormation, Crossplane, etc.
Translating client requirements and Business Requirement Documents into technical deliverables, and creating JIRA epics and stories for other engineers and myself; also handling the ITSM process, incident management, IAM key rotation, VPC networking maintenance, and cloud cost optimization, and conducting RCAs and implementing fixes.
Assisting with cloud migration strategy, creating and optimizing release management systems, and setting up and managing EKS and GKE clusters.
Strategic Leadership & Architecture: Led teams in Platform/Application Development, Data, and DevOps. Developed DR/BC plans, conducted cloud QA, and evaluated SaaS products. Created POCs, MVPs, and critical documents (Feature Evaluation, Competitive Analysis, SOPs, Business Justification, Technical Architecture Document, Solution Architecture Document, Platform Architecture Document). Designed Cloud, Data, Application, Microservice, and Low-Code architectures.
Full-Stack Solution Development: Translated business problems into web app solutions. Led UI/UX design, stakeholder approvals, team building, and product development. Managed the full deployment cycle, including stakeholder demos and end-user presentations. Provided team member feedback and performance evaluations.
Technical Expertise & Project Execution: Implemented serverless Python applications and Java-based microservices. Developed AWS solutions (S3, Lambda, EMR, VPC, KMS, Glue, Redshift, Athena). Created Python utilities for AWS and GCP services. Managed GCP support, including IAM and Org policies. Implemented security measures like XDR.
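As an illustration of the kind of Python utility mentioned above (for example, around IAM key rotation), here is a minimal sketch; the 90-day window, the function name, and the boto3 wiring shown in comments are assumptions for illustration, not the actual implementation:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical helper for an IAM key-rotation job: decide which access keys
# are past the rotation window. The 90-day policy is an assumed value.
ROTATION_WINDOW = timedelta(days=90)

def keys_due_for_rotation(keys, now=None):
    """keys: list of dicts shaped like boto3 iam.list_access_keys() entries,
    e.g. {"AccessKeyId": "...", "CreateDate": datetime, "Status": "Active"}."""
    now = now or datetime.now(timezone.utc)
    return [
        k["AccessKeyId"]
        for k in keys
        if k["Status"] == "Active" and now - k["CreateDate"] > ROTATION_WINDOW
    ]

# In a real job this would feed boto3, e.g.:
#   iam = boto3.client("iam")
#   meta = iam.list_access_keys(UserName=user)["AccessKeyMetadata"]
#   for key_id in keys_due_for_rotation(meta):
#       iam.update_access_key(UserName=user, AccessKeyId=key_id, Status="Inactive")
```

The pure function keeps the rotation decision testable without touching AWS.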
1 December 2022 - 6 May 2024
Position: Technical Architect
Role: Technical Architect & Team Lead
Type: Full Time Employment
Pixeldust Technologies
Building the below architectures:
1) GCP Cloud Architectures
2) DevOps Architectures
3) 10x Data Engineering Enablement Architectures
4) Extendable Tier-Based Terraform Architecture
Responsibilities
1) Drive all tech-related decisions end to end for the entire umbrella of projects under Cloud Enablement.
2) Choose the best technologies and standards to be followed across projects.
3) Collaborate with existing team members, and identify and assign all tasks to DevOps and Senior Platform Engineers needed for migrating the various architectural components from the on-premise Cloudera-based Hadoop distribution ecosystem to GCP.
4) Collaborate with senior management and software engineers to drive and ensure proper execution of technical tasks, and to explore and choose technologies for the tech stack.
Project : Confidential
Work Done:
1) Analysed and chose the best IaC (Infrastructure as Code) tool in the industry; compared Terraform vs Pulumi vs Crossplane.
2) Analysed and chose a Git branching strategy and set source code repository structuring standards.
3) Decided the Terraform profiling strategy; compared workspaces vs branching vs Terragrunt vs directory structure.
4) Analysed and chose the best NoSQL data store.
5) Evaluated Kubernetes multi-tenancy strategies: namespaces vs vclusters.
6) Evaluated Prophecy.io and other similar tools that offer visual editors for creating Spark jobs and Airflow workflows.
7) Created cloud architectures in a way that enables 10x data engineering teams in the organisation.
8) Combined visual editor tools like Prophecy.io for 10x-speed Spark job creation, a managed Apache Airflow tool like Astronomer.io for easy DAG creation with good support for the underlying infra, and BI tools for 10x reporting.
9) Translated on-prem access management to GCP IAM and identity, and architected the different service accounts and roles to be created for GCS, BigQuery, and Dataproc.
Initiative: Prophecy POC
Description: 1) Installed the Prophecy tool onto GCP GKE and set up a Dataproc cluster and bastion host, leading a team of 2 cloud engineers and collaborating with 3 Prophecy engineers.
2) Created various types of ETL pipelines using the Prophecy tool's visual editor feature.
3) Evaluated Prophecy's compatibility with ML workflows; collaborated with various data scientists to learn from their experience.
4) Created these documents: Competitive Analysis, Business Justification, Technical Setup, and Feature Evaluation Sheet.
5) Presented the Prophecy tool to a wide audience of data ecosystem personas: senior data scientists, data analysts, data engineers, CDP architects, cloud architects, Telekom architects, and various regional heads.
6) Also collaborated with the above personas to gain more insights for feature evaluation and validation of our business use case.
7) As a bonus, got advance previews of generative AI features and how they make low-code tools even easier to use.
Initiative: Table Classification Product
Role: I wear multiple hats (Solution Architect / Product Architect / Full Stack Cloud DevOps AI Engineer / Team Lead / Product Owner / Technical Evangelist / Sales Engineer)
Business Accelerator: We needed to ingest 32k tables from on-prem to the cloud, but only TKG-compliant data may be ingested to the cloud.
Solution: I proposed a web app with which users can set the compliance status of tables, define associations between tables, and delegate table ownership to other users.
Tech stack:
Machine Learning: OpenAI
Front end: JavaScript, React, Material UI
Backend: Java, Spring Boot
Database: Neo4j
Deployment: GCP (GKE, GCS, Cloud Build, jumphost VM, Cloud DNS), GitLab CI/CD, Docker, Kubernetes, Helm, Skaffold, and a Python script for loading database data after deployment.
Design: Figma
Built multiple forms (Login, Register, Compliance, Association, Delegation, Profile, All Tables) and proposed a fancy OpenAI feature for data search using a GPT model (LLM).
Built GitLab pipelines and GCP infra for product deployment; in the next iteration will build the analytics page, notification flow, SMTP integration, request flow, and the OpenAI feature if approved.
I will also optimize the GitLab pipeline using cloud-native deployment, and sync with the business to extend the product's scope to enable migration of other data hubs in the company.
Delivered the first iteration in 5 months, made the product available to end users under a domain name, and also work as a technical evangelist to help drive adoption of my product within the company.
For this project, I chose a team of 5 freshers, 1 UI/UX designer, 1 senior frontend engineer, and 1 cloud engineer, and successfully turned them all into multi-skilled engineers within 3 months; now, like me, they can wear multiple hats and confidently deliver any challenging technical task.
April 2022-November 2022
Position: Principal Software Engineer
Role: Principal Software Engineer
Type: Full Time Employment
Atlan
Description: We are pioneers in Kubernetes-based multi-tenant automation. You may have heard of virtualisation and namespaces; what we have done is many levels above that.
Project : Silo Multi-Tenant Model Setup on Customer AWS Account Automation
Position: Principal Cloud Engineer
Tech Stack: AWS services such as CloudFormation, SNS, S3, EKS, EC2, Resource Groups, Lambda functions, Systems Manager Documents, Parameter Store, Secrets Manager, RDS, CloudFront, and VPC; plus Kubernetes, Docker, Helm charts, Argo CD and Argo Workflows, GitHub Actions, Java, Python, and polyglot programming.
Work Description: Architected and implemented the automation. Customers run a CloudFormation template that spins up all the infrastructure required for the product, generates a kubeconfig file and a values.yaml file (with the customer's public info) via a Kubernetes job executed during template execution, and places them into an S3 bucket. The template also creates a Lambda function with an S3 create-event trigger, which generates pre-signed URLs for the kubeconfig and values.yaml files and sends them to our public SNS topic. In our account, I wrote another Lambda that processes these pre-signed URLs, stores the received file data in Secrets Manager and an RDS Aurora PostgreSQL database, and then triggers a multi-step Argo Workflow:
Step 1: Add the customer cluster to our Loft vcluster instance.
Step 2: Create a vcluster in the physical cluster.
Step 3: Update the CloudFront origin in the customer account through a Kubernetes job, using a Kubernetes service account that has assumed a customer AWS account role with the required permissions.
Step 4: Create parameters in the customer's Parameter Store by copying them from ours.
Step 5: Create a literal file for registering the app on our Argo CD.
Step 6: Deploy Atlan under the vcluster: create the app in Argo CD and let it sync the Helm chart changes and deploy the Atlan product.
All of the above was completed within 74 days of joining the company.
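A minimal sketch of the receiving Lambda's first step, unpacking the SNS message that carries the pre-signed URLs; the message field names here are assumptions for illustration, not the real schema:

```python
import json

# Simplified sketch of the receiving Lambda described above: unpack the SNS
# message carrying pre-signed URLs for the customer's kubeconfig and
# values.yaml. Field names ("kubeconfig_url", "values_url") are assumed.
def extract_presigned_urls(sns_event):
    record = sns_event["Records"][0]
    message = json.loads(record["Sns"]["Message"])
    return message["kubeconfig_url"], message["values_url"]

# A real handler would then download both files, store them with
# secretsmanager.put_secret_value(...) plus an Aurora INSERT, and submit the
# multi-step Argo Workflow (e.g. via the Argo Server REST API).
```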
After the above work was completed:
- Enhanced the customer AWS multi-tenant setup: built the tenant off-boarding workflow and worked out solutions for an all-tenants off-boarding workflow.
- Helped identify and find solutions for AWS account cost optimisation.
- Helped new joiners on the project get up to speed and created training material for them.
Project : Existing Multi-Tenant Setup
Description: Deploy Atlan instances to vclusters in our AWS EKS cluster using Loft vcluster, Argo CD, Argo Workflows, and Helm charts, with Terraform for IaC.
Project: AWS Cost Optimisation
Role : Solution Architect
1. Identified the key services causing high cost using AWS QuickSight reports.
2. Connected with a couple of senior startup solutions architects from AWS to get more ideas, and attended AWS cost optimisation seminars to learn more techniques.
3. Promptly brought down cost by $800/day by disabling dev environment EKS CloudWatch Logs.
4. Recommended the below techniques to further bring down the overall cost:
* Update existing EKS nodes to use AMD-series processors across all environments.
* Move all On-Demand instances to Spot for dev environment instances, and one On-Demand plus the rest Spot for production, via Terraform scripts. Terraform was used to spin up all the infra (with state stored in an S3 bucket), so making the change directly from the AWS console would cause Terraform state file drift.
* To re-enable logging in the dev environment, reduce the logging period and configure lifecycle rules to move data to Glacier storage.
* For Lambda, use ARM-based architecture: migrate all existing x86 architectures to ARM.
5. Assigned these tasks to other engineers on the team and ensured proper execution.
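The CloudWatch-logs change behind the $800/day saving can be sketched as follows; the helper only builds the payload that boto3's eks.update_cluster_config expects, and the cluster name in the comment is a placeholder:

```python
# EKS exposes five control-plane log types; disabling them in dev stops the
# CloudWatch Logs ingestion cost.
ALL_LOG_TYPES = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

def logging_config(enabled):
    """Build the `logging` payload for eks.update_cluster_config()."""
    return {"clusterLogging": [{"types": ALL_LOG_TYPES, "enabled": enabled}]}

# Real call (cluster name is a placeholder):
#   eks = boto3.client("eks")
#   eks.update_cluster_config(name="dev-cluster", logging=logging_config(False))
```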
RND projects:
1) POC: Send a cross-account AWS SNS topic notification from a Lambda function.
2) POC: Tested whether a cross-account SNS notification can be sent from AWS CloudFormation using the AWS CLI; it is possible via the CLI but not through the AWS Console.
Outcome: Raised an AWS feature request on the AWS CloudFormation roadmap GitHub repo to enable sending cross-account SNS topic notifications via the AWS Console.
All of the above was completed in 15 days.
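For the cross-account SNS POC, the receiving topic needs a resource policy allowing the sender account to publish. A hypothetical sketch (the account IDs and topic ARN are placeholders, not values from the actual setup):

```python
# Build an SNS topic access policy granting another AWS account publish
# rights. Applied with:
#   sns.set_topic_attributes(TopicArn=topic_arn, AttributeName="Policy",
#                            AttributeValue=json.dumps(policy))
def cross_account_publish_policy(topic_arn, sender_account_id):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCrossAccountPublish",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{sender_account_id}:root"},
            "Action": "sns:Publish",
            "Resource": topic_arn,
        }],
    }
```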
Project: Create Release Management RoadMap for the Company
Role: Project Owner & Enterprise Architect | One Man Army Role.
Work Done:
1. Identified problems with the current release processes.
2. Explored different solutions on the market that can serve as Docker image and Helm chart package repositories.
3. Listed the pros, cons, and cost of each registry solution.
4. Proposed solutions for the Helm chart repository and Docker repository, plus plugins and architectural changes to enable proper semantic versioning of artefacts in an automated fashion, using GitHub Actions, a server-side Git hook, and client-side hooks via Husky (an npm package) and the git-cz plugin.
5. Also recommended future enhancements: since we use GitHub, automate release documentation with GitHub Pages, and enable a GitHub community for long-term, growing collaboration between engineers. While creating this roadmap, compared two cost-effective data transfer solutions.
6. Led a team of 2 engineers in implementing the proposed architecture for the initial few tasks, then completed everything alone.
7. Created various estimated cost reports using the AWS Pricing Calculator.
8. Made and executed critical decisions around private ECR (no caching support), VPC NAT vs VPC PrivateLink connectivity with EKS for image pulls, and cross-region and cross-account replication strategy.
9. Wrote GitHub Actions workflows and scripts for CI/CD: push images to ECR, pull the latest image from ECR, create the ECR repo if it does not exist, create a lifecycle policy if it does not exist, keep the last 10 GitHub tags, delete all tags on feature branch deletion, and semantic versioning.
10. Designed strategies around the principle of least privilege for image pulls in EKS from ECR and image pushes from GitHub workflows using IAM OIDC roles.
11. Designed and architected strategies for rollback mechanisms; deployment strategies like blue-green, canary, and rolling updates; caching; replication; and automated image updates with Argo CD Image Updater.
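The automated semantic-versioning rule described above (conventional-commit prefixes deciding which part of MAJOR.MINOR.PATCH gets bumped) can be sketched in Python; this is a simplified illustration, not the actual Husky/git-cz implementation:

```python
# Map a conventional-commit subject to the next semantic version:
# breaking change -> MAJOR, feat -> MINOR, everything else -> PATCH.
def next_version(current, commit_subject):
    major, minor, patch = (int(p) for p in current.split("."))
    prefix = commit_subject.split(":")[0]
    if "BREAKING CHANGE" in commit_subject or "!" in prefix:
        return f"{major + 1}.0.0"
    if commit_subject.startswith("feat"):
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```

The same rule can then be enforced in a commit hook or computed in a GitHub Actions step before tagging.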
Project: Create Loft VCluster Course
Role: Content Creator and Course Instructor
Work Done:
1. Created Curriculum and Content for the course.
2. Created Enterprise level Spring Boot Application.
3. Created kubernetes deployment and load balancer type service yaml files for the spring boot application.
4. Created AWS EKS Cluster and deployed spring boot application into it.
5. Created Helm Chart for Spring Boot application.
6. Deployed Loft vcluster into AWS EKS, created a vcluster, and deployed the Spring Boot application into it.
Other Architectural Discussions:
1. Interactions with Google Employees and our vendor partner for POC on Google Anthos.
2. Interactions with junior engineers working on different cutting edge new technology initiatives.
3. Interactions with Amazon software engineers and solutions architects for solving specific architectural problems.
General Roles & Responsibilities:
1) Planning engineering strategies for the company
2) Implementing process improvements
3) Managing engineering departments in tasks like research and design
4) Providing expert advice to other engineers
5) Determining department goals and creating implementation plans
6) Creating and managing engineering budgets around estimated AWS service usage, GitHub plans, and other tooling
7) Architecting, planning, managing, and also executing when junior engineers are not available
Exposure to and walkthroughs of new technology initiatives around SRE, security, and automation.
Exposure to Kubecost, Teleport, Datadog, Rootly, etc.
Aug 2020 - 31 March 2022
Position: Started as Software Engineer II, promoted to Senior Software Engineer
Role: Full Stack Microservice Cloud DevOps Engineer
Type: Full Time Employment
HERE Technologies
Description: A maps company with many different products that provide map data in various formats and visualizations.
Product: HDMaps
Description: Created Java and Scala AWS utilities.
Product: Wall-E Lanes
Description: Created two ETL projects and pipelines for reading Protobuf data from a proprietary data source, built an algorithm for processing this data into GeoJSON, and finally published the data to HERE Technologies data catalogs.
Technologies: Java, Protobuf for reducing data size, Scala to reduce boilerplate code, Maven, Docker, Docker Compose for sequencing operations during deployment, Spark for parallelism, AWS EC2 for deployment, EC2 Image Builder for creating custom EC2 images, Jenkins, HERE OLP pipeline, AWS RDS, Splunk for logging, and Akka for asynchronous communication.
Application: ETL Project
Description: Created a complex algorithmic ETL project to read one MOM-format (GeoJSON) data feature and create 3 separate MOM feature objects out of it; for testing, used the geojson.tools open source utility to verify MOM (Map Object Model) formatted input and output objects.
Purpose: Seeding Wall-E with lanes data from different data sources; Wall-E enables the highest level of autonomous driving.
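The feature-splitting step can be illustrated with a small Python sketch; the property names and layer groups below are invented for illustration, since the real MOM attributes are proprietary:

```python
# Split one GeoJSON feature into several derived features, one per property
# group, sharing the original geometry. Group/property names are hypothetical.
def split_feature(feature, groups):
    base_geometry = feature["geometry"]
    out = []
    for name, keys in groups.items():
        out.append({
            "type": "Feature",
            "geometry": base_geometry,
            "properties": {"layer": name,
                           **{k: feature["properties"][k] for k in keys
                              if k in feature["properties"]}},
        })
    return out
```

Output features can then be pasted into geojson.tools for visual verification, as described above.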
Application : LEAS
Description: Created a new environment in the existing complex pipeline and implemented PACT (consumer-driven contract testing): a consumer-side test for asynchronous communication with another microservice, which was the provider. Then set up the GitLab pipeline for this new project, and added a new stage (pact_stage) to the existing LEAS pipeline using the multi-project pipelines concept (triggering one project's pipeline from another's).
Then led the PACT consumer-side testing effort by identifying and creating JIRAs, assigning them to my team, and reviewing and merging their MRs.
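A consumer-side contract check of the kind PACT formalizes can be sketched in plain Python; this is a simplified stand-in, not the Pact library, and the field names are illustrative rather than the actual LEAS contract:

```python
# Verify that an asynchronous message matches the shape the consumer expects
# from the provider. Contract field names/types here are hypothetical.
EXPECTED_CONTRACT = {"event_type": str, "lane_id": str, "version": int}

def satisfies_contract(message, contract=EXPECTED_CONTRACT):
    return all(
        field in message and isinstance(message[field], expected_type)
        for field, expected_type in contract.items()
    )
```

In real Pact, the consumer publishes the expectation and the provider verifies against it in its own pipeline; this sketch only shows the consumer-side shape check.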
LDPS(Lane Derivation Preparation Service) Project :
Description: This project is part of the Lanes subsystem, which has a pipe-and-filter, event-driven asynchronous architecture pattern; the core responsibility of this project is to improve the geometry of lanes before providing it to its downstream service, LD (Lane Derivation).
Work done:
1) Created shell scripts for deployment and un-deployment of Kubernetes pods.
2) Provided DevOps support for this project, which involved manual deployment and un-deployment of Flink pods such as the Task Manager pod, Job Manager pod, orchestration pod, Data Hub Connector pod (Data Hub is a RESTful API application for performing CRUD operations on the DB), Streaming Application pod (this application constantly polls the DB for data and copies it to the OLP catalog topic, which is a Kafka topic), AI/ML application pods, and the AWS SQS pod (reads and writes data to AWS SQS queues).
3) Did cleanup of DB tables and AWS resources for this project.
4) Created new environments and their required components: Kubernetes namespaces, databases (created and configured AWS RDS Aurora PostgreSQL), Splunk indexes, and log forwarding configuration.
5) Created a GitLab pipeline from scratch to automate deployment, un-deployment, and cleanup of all LDPS components across all environments.
6) Automated the daily SIT (System Integration Test) environment cleanup task by creating a GitLab scheduler.
7) Also provided DevOps support for other services in the Lanes domain that are part of the Lanes subsystem in Wall-E, such as monitoring Kubernetes pod health with Grafana dashboards and checking Splunk logs whenever needed.
8) Configured AWS VPC peering between multiple AWS accounts within the organisation; also solved VPC peering issues by updating routing tables and confirming correct routing.
9) Used and configured Kubernetes components extensively in this project.
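The routing-table fix for VPC peering amounts to adding, on each side, a route that sends the peer VPC's CIDR through the peering connection; a hypothetical sketch (the IDs and CIDRs are placeholders):

```python
# Build the arguments for ec2.create_route() that steer traffic bound for the
# peer VPC's CIDR through the peering connection.
def peering_route(route_table_id, peer_cidr, peering_connection_id):
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": peer_cidr,
        "VpcPeeringConnectionId": peering_connection_id,
    }

# Applied with (placeholder IDs):
#   ec2 = boto3.client("ec2")
#   ec2.create_route(**peering_route("rtb-0abc", "10.1.0.0/16", "pcx-0def"))
```

The same route must exist in the peer account's route tables for return traffic, which is the usual cause of one-way connectivity.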
May 2021-Dec 2021
Role: Course Instructor
Type: Freelancer
MicroStream
Collaborated with MicroStream to provide trainings on Spring Boot fundamentals and advanced Spring Boot concepts, along with integrations with various cloud services and cutting-edge technologies.
Sept 2020-Present
Role: Course Instructor | Course Publisher | Content Creator | B2B Engagement Manager
Type: Freelancer
Great Learning
Creating multiple courses on microservices that are used for corporate trainings and PG programs, and providing live group trainings for entire cohort runs of Full Stack programs to industry experts. Also researching new technologies and pioneering new horizons per client expectations.
Aug 2020-Present
Role: Guided Project Publisher
Type: Freelancer
Coursera
I am the publisher of 4 Guided Projects on Coursera, built using the Rhyme platform, on microservices with the Spring Boot framework and its integrations with other frameworks; my guided projects have more than 1,500 enrollments.
https://www.coursera.org/instructor/samarth-narula
Jan 2020-Present
Role: Publisher
Type: Freelancer
Udemy
I have published two top-rated, best-selling courses on microservices and design patterns in Java on Udemy, the most popular MOOC platform; my courses have more than 30,000 enrolments.
Link Below:
Aug 2020-Present
Role: Publisher
Type: Freelancer
Tutorialspoint
I have published two courses on TutorialsPoint
Link Below:
https://www.tutorialspoint.com/videotutorials/profile/samarth_narula
May 2021-Aug 2021
Role: Technical Trainer
Type: Freelancer
Techmentry
I provide personal and group technical trainings on Java, Spring Boot, microservices, cloud technologies, and most of the frameworks currently trending in the industry.
Feb 2017-Jan 2021
Role: Technical Trainer
Type: Freelancer
Bloombench
I have trained many IT Industry experts from all around the globe on Bloombench Platform. Majority of my students were from well known organizations like Morgan Stanley, ADP (Atlanta), Deloitte US, JPMC (Texas) etc.
http://www.kossine.com/professors/samarth%20Narula/5869
Dec 2019-Aug 2020
ORACLE FINANCIAL SERVICES SOFTWARE LIMITED
Role: Staff Consultant at Oracle and Senior Java Developer for State Street Bank
Type: Full Time Employee
Project Name: Open Trading
Description: As a Senior Java Developer at State Street Bank, my role was to understand what data traders and portfolio managers require to make business decisions, pull that data from different Bloomberg terminal screens using the Bloomberg APIs and the respective mnemonics, and then write algorithms for processing and loading the data into an Oracle DB and flat files. There was no JUnit test coverage for their applications, so I wrote JUnit and Mockito tests for all of them and fixed many existing data issues. The Sonar quality gate was failing, so I fixed the Sonar issues and made all their applications pass the Sonar quality report. Also fixed broken Jenkins pipelines.
Innovation: Migrated and decoupled all the existing legacy monolithic applications into microservice applications, and replaced JDBC code with JPA and Hibernate to resolve open DB connection issues. Implemented design patterns where they fit best. Got rid of boilerplate code and made the applications more resilient using the Lombok framework. Also created API documentation.
Free time: POCs while in the resource pool
Description: During the first month after joining, while in the resource pool, built many POCs on Spring Boot with Kafka, Eureka, Docker, AWS EC2, Lambda, Elastic Beanstalk, CodePipeline, S3, DynamoDB, JMeter, and microservice design patterns like API Gateway, Saga, CQRS, and event-driven design. Also researched architecture solutions provided by Microsoft, then researched chaos testing tools and methodologies required for business continuity. Also built a coronavirus tracker and deployed it on PCF Cloud to help people around the globe.
Nov 2017-Dec 2019
Position: Started as Associate Software Engineer got promoted to Software Engineer
Type: Full Time Employee
Accenture
Client: MasterCard
Best Achievement: Played a critical role for Accenture in winning the MasterCard contract by creating a technical POC; once Accenture won the contract, worked on the below MasterCard project.
Project:- Customer Parameter Enablement
Description: Automated the business onboarding of banks, processors, and other financial entities onto the MasterCard network by developing a web platform that helps banks and other firms onboard faster through self-guided understanding and completion of all required documents (such as the Durbin agreement and the EMV document) for new accounts, so that newly onboarded financial institutions can perform transactions.
Roles & Responsibilities: Everything from scratch, from development to configuring the cloud for deployment of new microservices. Built a lot of Spring Boot integration POCs that were eventually implemented in the project: MongoDB, SQL, H2, Camunda, AngularJS, caching with Redis Cache, configuration of MongoDB Atlas Cloud, Lombok, JPA, Hibernate, Spring Security, Spring Cloud, test-driven development, PCF Cloud, AWS cloud, and a multi-cloud module with Google OAuth2-based authentication.
Client: Goldman Sachs
Project Name : - Trader Mandate
Description: The Mandate application gives various business users the capability to view, review, edit, and approve mandates. Mandates are documents that prescribe restrictions on the trading activity of a desk by outlining such things as who can trade, what they can trade, and for which entities they can book trades.
The application's workflow and reporting (mandate PDFs) also serve to document and provide controls, and evidence of controls, to ensure the firm is acting in accordance with regulatory requirements (the Volcker Rule).
Role & Responsibility: Development, enhancement, maintenance, and deployment.
EDUCATION
2013-2017
Bachelor of Technology - BTech
University of Mumbai
Engineering in the field of Information Technology from Atharva College of Engineering.
SKILLS

Microservice Design Patterns
Advanced Java
DevOps
Spring Cloud
Software and Business Product Development
REST APIs and Web Services
SQL and NoSQL Databases
AWS & GCP
Spring Boot
Cloud Computing
Software Project Management
Enterprise Architectures
Coaching and Mentoring
Docker, K8 & Helm
EXPERTISE
I'm an Open Source Contributor
I'm a Technical Architect | Enterprise Architect | Cloud Architect | Microservice Architect | Product Architect
I'm a Course Publisher | Course Instructor | Career Mentor
I have contributed to more than 100 public open-source projects
I have worked on multiple projects and multiple cutting-edge technologies (open source and proprietary) and have successfully delivered and managed production releases and project resources. I have also completed many online certification courses; the complete list of my certifications is available on my LinkedIn profile.
I have taught more than 40,000 students from around the globe through various well-known teaching companies with different learning models.