Tanmoy has 16 years of IT experience, including 7 years in an AVP role at JP Morgan and a current tenure at CBA as an AI Automation Architect. He has mainly worked in automation roles, using tools such as Selenium, Rest Assured, Playwright, and Cypress, and has recently started contributing to UI development using React and Next.js. His infrastructure knowledge is adequate for automation roles, with experience in Docker, Kubernetes, Linux, Unix, and Jenkins. He has also worked on multiple POCs, most recently a GenAI agent implementation on his latest project.
Ideas and innovation

Situation: Identify a gap and propose a solution or technical direction to solve a problem statement in the project. How did he present the idea, what data did he use to support it, and what was the outcome?
Current project: Integrating AI tools into the end-to-end (E2E) test automation architecture addresses the challenge of flaky canaries caused by frequently changing credit card rules, by automating the complex process of rule comparison, test generation, and maintenance. Here is a strategic approach to integrating AI to rectify flaky canaries:
Automating Rule Comprehension and Test Case Generation

The core of the problem is the manual effort required to understand and translate new rules into test cases. AI can streamline this process:
• Natural Language Processing (NLP) for Rule Ingestion:
  o Feed the documentation (PDFs, Confluence pages, specification documents) containing the new credit card rules into an NLP engine [1].
  o The NLP model can parse, understand, and extract key parameters (e.g., "minimum credit score required," "maximum transaction limit for new users," "fraud detection triggers").
• AI-Powered Test Case Generation:
  o Use a Large Language Model (LLM) or a specialized AI agent fine-tuned on testing principles to translate the extracted parameters into structured, executable test scenarios (e.g., in Gherkin format or a simple data table) [1].
  o This automates the creation of diverse test cases, including positive, negative, and edge cases, ensuring comprehensive coverage without manual input [1].
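The ingestion-to-Gherkin flow above can be sketched in a few lines. This is a minimal illustration, not the actual implementation: `extract_parameters` is a regex stand-in for the NLP/LLM step, and the rule phrasing it matches is hypothetical.

```python
import re


def extract_parameters(rule_text):
    """Stand-in for the NLP ingestion step: pull numeric thresholds
    out of a plain-text rule description with simple patterns."""
    params = {}
    m = re.search(r"minimum credit score of (\d+)", rule_text, re.I)
    if m:
        params["min_credit_score"] = int(m.group(1))
    return params


def generate_gherkin(params):
    """Translate extracted parameters into positive and negative
    Gherkin scenarios (boundary and just-below-boundary cases)."""
    scenarios = []
    if "min_credit_score" in params:
        score = params["min_credit_score"]
        scenarios.append(
            "Scenario: Application accepted at the minimum score\n"
            f"  Given an applicant with credit score {score}\n"
            "  When the application is submitted\n"
            "  Then the application is approved"
        )
        scenarios.append(
            "Scenario: Application rejected below the minimum score\n"
            f"  Given an applicant with credit score {score - 1}\n"
            "  When the application is submitted\n"
            "  Then the application is rejected"
        )
    return scenarios


rule = "New cards require a minimum credit score of 700."
scenarios = generate_gherkin(extract_parameters(rule))
```

In practice the regex step would be replaced by the LLM call, but the shape of the pipeline (rule text in, structured positive/negative scenarios out) stays the same.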
Intelligent Maintenance and Self-Healing Tests

Frequent changes mean test maintenance is a major bottleneck. AI can make the tests resilient to these changes:
• Visual Regression and UI/API Adaptability:
  o Integrate AI-powered visual testing tools (like Applitools or Percy) to detect unintended UI changes and automatically adjust selectors or flag discrepancies, reducing flakiness caused by minor UI shifts [2].
  o For API-level tests, AI can analyze API schema changes and automatically suggest updates to payloads or endpoints, minimizing manual script rewriting.
• AI-Driven Root Cause Analysis (RCA):
  o When a canary test fails, an AI diagnostic tool can analyze logs, performance metrics, and recent code changes to pinpoint the likely cause. Instead of simply reporting a failure, it provides actionable insights, reducing the Mean Time to Resolution (MTTR) for flaky tests [1].
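The RCA step can be illustrated with a deliberately crude sketch: a keyword classifier over failure logs standing in for the AI diagnostic tool. The failure categories and log signatures below are illustrative assumptions, not from any specific product.

```python
def classify_failure(log_lines):
    """Heuristic stand-in for the AI diagnostic step: map failure
    signatures found in log lines to a likely root-cause category."""
    signatures = {
        "environment": ("connection refused", "timed out", "dns"),
        "ui_change": ("no such element", "stale element"),
        "data_or_rule": ("assertionerror", "expected", "mismatch"),
    }
    scores = {cause: 0 for cause in signatures}
    for line in log_lines:
        lower = line.lower()
        for cause, needles in signatures.items():
            if any(needle in lower for needle in needles):
                scores[cause] += 1
    best = max(scores, key=scores.get)
    # If nothing matched, do not guess a cause.
    return best if scores[best] > 0 else "unknown"
```

A real implementation would feed the same inputs (logs, metrics, recent diffs) to an LLM or trained classifier, but the value is identical: the canary report says "likely environment issue" instead of just "failed", which is what cuts MTTR.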
Predictive Flakiness Detection and Prevention

Move from reactive maintenance to proactive prevention:
• Machine Learning (ML) for Anomaly Detection:
  o Train an ML model on historical test execution data (pass rates, execution times, environment variables). The model learns patterns associated with flakiness and can flag tests that are likely to fail in the next run, allowing engineers to investigate before deployment [1].
• Environment Intelligence:
  o Use AI to monitor the testing environment's stability. If AI detects resource contention, network latency spikes, or configuration drift, it can pause test execution or flag results as unreliable due to environment issues, preventing false failures.
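As a simple proxy for the ML model, flakiness can be scored directly from pass/fail history: a flaky test flips outcome often, while a consistently broken one does not. This is a minimal sketch under that assumption; a production model would also use execution times and environment features.

```python
def flakiness_score(history):
    """history: list of booleans (True = pass) in chronological order.
    Score = fraction of consecutive runs where the outcome flipped.
    Flaky tests flip frequently; stable or consistently broken
    tests score near zero."""
    if len(history) < 2:
        return 0.0
    flips = sum(1 for a, b in zip(history, history[1:]) if a != b)
    return flips / (len(history) - 1)


def flag_flaky(test_histories, threshold=0.3):
    """Return the names of tests whose flip rate exceeds the
    threshold, so engineers can investigate before deployment."""
    return [name for name, history in test_histories.items()
            if flakiness_score(history) >= threshold]
```

The key design point survives the simplification: flakiness is detected from historical patterns before the next run, not diagnosed after a failure.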
Integration into the E2E Architecture

This is how these components fit into the existing architecture:
• Rule Input: New rules are ingested via the NLP engine when provided by banks [1].
• AI Layer: The AI processes the rules, generates test data and scenarios, and interfaces with existing automation frameworks (e.g., Selenium, Cypress, Rest Assured) [1].
• Execution and Monitoring: Canaries run as usual within the CI/CD pipeline.
• Feedback Loop: AI monitoring tools watch for flakiness, perform RCA on failures, and feed insights back to developers and the test generation engine for continuous improvement.
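The four stages above compose into a loop. This sketch wires them together with injected callables (`ingest`, `generate`, `execute`, `diagnose` are hypothetical stage interfaces, not real APIs), showing how RCA insights flow back into the next generation cycle.

```python
def feedback_loop(ingest, generate, execute, diagnose, max_cycles=3):
    """Orchestrate the E2E loop: ingest rules, generate scenarios
    (informed by prior insights), execute canaries, diagnose any
    failures, and feed the diagnoses back into generation."""
    insights = []
    for _ in range(max_cycles):
        rules = ingest()
        scenarios = generate(rules, insights)
        results = execute(scenarios)
        failures = [r for r in results if not r["passed"]]
        if not failures:
            return {"status": "green", "insights": insights}
        # RCA output becomes input to the next generation pass.
        insights.extend(diagnose(f) for f in failures)
    return {"status": "needs_attention", "insights": insights}
```

The point of the structure is that diagnosis is not a dead end: each cycle's insights are available to the test generation engine, which is what makes the maintenance loop continuous rather than reactive.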
Execution, Innovation, Project understanding

Pros: He walked through his transition from Automation Engineer and SDET into an AI Automation Architect role based on h
