AI Enabled Task Automation Technology Trends in the IT Industry


Contents
Introduction
Percentage of engineering workload automated or assisted by AI
AI Automation Tools & Adoption in Major Tech Companies
Examples of Use-Case Automation
Deeper Look: Methods & Supporting Technologies
Controlled Studies with Quantitative Findings
Other Studies of AI Tools (Quality / Usage): Less Direct on Percentage of Code Generated
Quantitative data about impact of AI adoption in large tech enterprises
Breakdown of “25% tasks by AI” at Google
Percentage of tasks each tool contributes
Google’s AI Automation Pie Chart
Comparison: Google vs Microsoft vs others
What to Watch in 2025-2026
Recommendations: What Should IT Organisations Do
Conclusion

Introduction
The AI revolution the world over has brought forth several new software technologies to automate tasks at scale.
Almost all the companies in the IT industry are dealing with the challenge of automating their tasks wherever
possible. This article is an effort to explore and present the tools and technologies used for task automation by
leading IT companies. It is a detailed survey of what these tech companies are doing with AI and automation in their
engineering, operations, and workflow tools, i.e. which tools and internal products they use, tasks they automate,
and what technologies empower those systems.

Percentage of engineering workload automated or assisted by AI
Here are some public-data-based estimates of how much code or engineering work is now being done by AI in
leading tech companies. These numbers come from CEO statements, interviews and media reports. They should be
treated as approximations.
• Microsoft: ~20–30% of its code is AI-generated. Satya Nadella said around 20–30% of the code in Microsoft’s repos is written by AI, and that this share will increase; some parts of the codebase are more AI-generated than others.
• Google: over 25% of new code is AI-generated. Sundar Pichai has said “over 25%” of Google’s new code is generated by AI; the figure refers to new code.
• Robinhood: ~50% of new code is AI-generated. Vlad Tenev (CEO) said close to 100% of engineers are using AI code editors and that about half of new code is generated by AI, while admitting that a precise separation between human and AI code is murky.
• Anthropic: 90% of code in many teams is now written by AI. Anthropic’s CEO said that, in some contexts, 90% of the code in many teams is now written by AI, though with human oversight.
• Other companies: various, less specific numbers. For example, some reports say Meta expects 50% of development to be AI-done within the next year or so, but exact current percentages are less well documented.

Table: Estimates of AI Code / Automation Adoption
Gaps & Caveats
• Definition: What counts as “AI-generated code”? Is it a full function or module written automatically, or just autocomplete / snippet / paste suggestions? Companies often don’t clarify.
• Human supervision: Nearly all cases involve human review, editing, or acceptance of AI proposals. So
“AI-generated” often means “AI-assisted and human-verified”.
• Variation by project & team: The share of AI usage differs widely by type of code (library vs UI vs
infrastructure), seniority, language, etc. What works well in some teams may not in others.
• Lack of public datasets with fine-grained metrics: Very few studies provide breakdowns (e.g. of lines of
code, percentage of features, etc.) for many large companies.
• Bias toward measuring junior / simpler tasks: Many experiments show larger gains for junior or entry-level
engineers; for senior, integrative, architectural work the gains are smaller or more complex.

AI Automation Tools & Adoption in Major Tech Companies
Here is a comparison of tools and practices from major tech companies, showing what internal tools they use, stage
of adoption (internal / external / rollout status), and what kind of task automation they enable. Some of the info is
from public discussion, research papers, etc.

• Google / Alphabet
o Tools: Goose (an internal AI model trained on many years of Google engineering data; helps engineers write code); Smart Paste (suggests edits after code is pasted); Gemini Code Assist (helps with code generation, suggestions, completions, etc.); Jules (for PRs and bug fixes); AI playbooks (agentic workflows, model fine-tuning).
o Adoption stage: Mixed. Google’s internal systems are proprietary, and there is only partial visibility.
o Tasks / flows automated or supported: code generation, completion, and edits; style and naming consistency; reducing repetitive tasks after code is pasted; generating and reviewing pull requests; reproducing bugs; test generation; assisting developers with internal docs and lookups; workflows in terminals and cross-application automation (via CLI).

• Facebook / Meta
o Tools: CodeCompose (built on InCoder and fine-tuned on Meta’s internal code); CodemodService + SCARF for large-scale automated code changes (formatting, removing dead code, API migrations, etc.); Code Llama (public / community LLM for code tasks).
o Adoption stage: Mixed. Some tools are internal (CodeCompose, Codemod); others are public or community-available (Code Llama).
o Tasks / flows automated or supported: inline code completion and suggestion (single-line and multi-line) in IDEs; automated code refactoring and cleanup; style / formatting consistency; test-coverage improvements; help with boilerplate tasks and easier API discovery.

• Microsoft / GitHub
o Tools: an internal AI-powered code review assistant supporting ~90% of pull requests, catching issues and enforcing best practices; Microsoft 365 Copilot (used internally by employees globally) for document, calendar, and communication-workflow automation; AI features in Azure, Teams, etc. that integrate AI into dev workflows; open-source / internal toolkits such as NNI (Neural Network Intelligence) for AutoML tasks like feature engineering.
o Adoption stage: Mixed, large-scale internal plus customer / external use. Many of these tools are used internally first; some are productized (Copilot, code reviews).
o Tasks / flows automated or supported: catching bugs earlier via automated code review; enforcing consistency and best practices; productivity improvements for knowledge workers (document generation, summarization, etc.); automating test generation or suggestions; automating model-development tasks (AutoML); reducing latency in dev feedback loops.

• Amazon / AWS
o Tools: Amazon Q (generative assistant for developers and business users: code generation, debugging, planning, reasoning, summarization over internal data); Amazon Q Developer for SDLC automation, IDE suggestions, refactoring, documentation, code reviews, etc.; AWS CodeWhisperer for pair-programming-style autocomplete and function generation; internal project “Kiro”, reportedly under development to improve on or augment these tools; internal chatbot “Cedric” handling internal queries and information across company resources.
o Adoption stage: Mixed. Many tools are external or made available to customers (CodeWhisperer, Q Developer); others are internal or in a pilot state (Kiro, Cedric).
o Tasks / flows automated or supported: real-time code autocomplete / snippet generation; suggesting or auto-writing full functions or features from prompts; debugging, test suggestion, and vulnerability detection; helping developers understand internal codebases; non-developer business tasks (dashboarding, summarization over internal data).

• Others / general practices
o Tools: AI review assistants, built or adopted (e.g. Microsoft’s code review tool); internal tools for automated refactoring / code cleanup at scale (like Meta’s Codemod); third-party coding agents or assistants (Copilot, etc.) where allowed; AutoML and feature-engineering tools within ML teams; workflow automation using agents and chatbots for business tasks (reports, docs, internal search).
o Adoption stage: Depends. Smaller firms may use off-the-shelf tools; larger ones build internal tools or customize external ones. Adoption levels and safety guardrails vary widely.
o Tasks / flows automated or supported: automating boilerplate and common code patterns; reducing repetitive refactoring; improving code quality; speeding up PR workflows; knowledge retrieval and documentation; non-technical tasks (report creation, schedules, etc.).

Table: AI Automation Tools & Adoption in Major Tech Companies

Examples of Use-Case Automation
Here are more concrete automation tasks being handled by AI in these companies:
• Auto-completion and boilerplate generation (e.g. scaffold modules, setup code)
• Generating documentation from code or from design specs
• Suggesting bug fixes, code refactoring
• Pull request generation / diffs summarization
• Test generation (unit/integration)
• Code review assistance (flagging issues, suggesting edits)
• Workflow automations: generating reports, reconciling invoices or financial data, handling repetitive admin
work (emails / documents)
• Monitoring / error detection / alert triaging
Two caveats apply to these examples:
• Impact varies heavily by task type, language, developer seniority, and domain (e.g. UI vs backend vs infrastructure).
• The definition of “code generated by AI” can include small suggestions or snippets (autocomplete), which may inflate counts if not carefully defined.
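
To make one of these tasks concrete, here is a minimal sketch of LLM-driven unit-test generation. It assumes the OpenAI Python client and a placeholder model name; none of the companies surveyed publish their internal prompts or endpoints, so this only illustrates the general suggest-then-review pattern.

```python
# Minimal sketch: ask an LLM to draft unit tests for a given function.
# Assumes the OpenAI Python client; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOURCE = '''
def slugify(title: str) -> str:
    """Turn an article title into a URL slug."""
    return "-".join(title.lower().split())
'''

prompt = (
    "Write pytest unit tests for the following function. "
    "Cover normal input, empty input, and extra whitespace.\n\n" + SOURCE
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model; any capable code model works
    messages=[{"role": "user", "content": prompt}],
)

# The generated tests are a *suggestion*: in the workflows described above,
# an engineer reviews and edits them before they are committed.
print(response.choices[0].message.content)
```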

Deeper Look: Methods & Supporting Technologies
Here are recurring technological patterns / architectural choices that enable these tools to work well, and what
makes them effective.
1. Large Language Models (LLMs) fine-tuned on internal code / corporate codebases
o Companies gather their own code and internal repo histories to fine-tune general models (e.g. Llama2-based models at Meta, internal models at Google); a minimal fine-tuning sketch appears after this list.
o Fine-tuning targets tasks like code completion, infilling, bug detection, and style consistency.
2. Model Variants & Specialization
o Having variants by size (smaller, faster models for live autocomplete vs larger models for deeper
suggestions).
o Specialization by language (e.g. Python-specific, JS/TS) or by instruction format (“instruct” models).
o Sometimes by task (e.g. unit test generation, debugging).
3. Multi-line / Multi-file Suggestions
o Rather than just single-line suggestions, tools are pushing multi-line suggestions, refactorings that span files, and structural edits.
o This raises the complexity of tracking diffs, merging suggestions, and ensuring correctness.
4. Static Analysis + Tooling Integration
o Code checkers, static analyzers, linters, and tools like Facebook’s Infer catch bugs, analyze resource usage, etc.; AI is integrated to make suggestions in these contexts.
o Code structure, dependency graphs, and type information are used to inform suggestions.
5. Latency & Model Serving Optimizations
o Users expect fast responses, especially while typing or editing, so companies build infrastructure to reduce inference time, cache common suggestions, and use smaller or hybrid models.
o Large context windows are also used so that the model can “see” more of the project and understand its context (dependencies, file structure, etc.).
6. Agents & Workflow Orchestration
o Agents that can perform multi-step tasks: e.g. fetch data from multiple sources, cross-reference it, produce a summary, send results, or trigger downstream actions; a minimal agent-loop sketch appears after this list.
o Connectors and APIs are used to integrate with internal tools (ticketing systems, the codebase, document storage, communication tools).
7. Human-In-The-Loop / Review Process
o Even in highly automated systems, human review is used for safety, correctness, style, security.
o Tools often generate suggestions rather than directly committing code; engineers decide what to
accept.
o Feedback from users is used to refine models / tooling.
8. Monitoring, Metrics, Safety & Guardrails
o Measuring usage: how many suggestions are accepted; how much code is produced; error rates; bug reintroduction (a small metrics sketch appears after this list).
o Using static checks to guard against insecure code.
o Security / privacy considerations: ensuring the internal codebase doesn’t leak; ownership; licensing.
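
For item 1, here is a minimal sketch of fine-tuning an open code model on an internal repository using the Hugging Face transformers and datasets libraries. The repository path, base model, and hyperparameters are illustrative assumptions; the real internal pipelines behind tools like CodeCompose or Goose are proprietary and far larger.

```python
# Sketch: fine-tune a small causal LM on an internal code corpus.
# Paths, model name, and hyperparameters are illustrative placeholders.
from pathlib import Path
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "Salesforce/codegen-350M-mono"  # placeholder: any small open code model
REPO_ROOT = Path("/srv/internal-repo")       # hypothetical internal checkout

# 1. Collect source files from the internal repository.
files = [p.read_text(errors="ignore") for p in REPO_ROOT.rglob("*.py")]
ds = Dataset.from_dict({"text": files})

# 2. Tokenize for causal language modelling (infilling-style data would need
#    extra preprocessing, omitted here).
tok = AutoTokenizer.from_pretrained(BASE_MODEL)
tok.pad_token = tok.eos_token
ds = ds.map(lambda batch: tok(batch["text"], truncation=True, max_length=1024),
            batched=True, remove_columns=["text"])

# 3. Standard Trainer loop on the internal corpus.
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-internal-code",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```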
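For item 6, here is a minimal sketch of an agent loop in which a model repeatedly picks a tool, the runtime executes it, and the observation is fed back until the model produces an answer. The tool functions and the scripted LLM stand-in are hypothetical placeholders for whatever connectors and model endpoint a company actually wires in.

```python
# Sketch of a tiny agent loop: the model picks a tool, the runtime executes it,
# and the observation is fed back until the model answers. Everything here is a stub.
import json

def search_tickets(query: str) -> str:
    return f"2 open tickets matching '{query}' (stub)"    # placeholder connector

def summarize_doc(doc_id: str) -> str:
    return f"Summary of document {doc_id} (stub)"         # placeholder connector

TOOLS = {"search_tickets": search_tickets, "summarize_doc": summarize_doc}

# Stand-in for a real chat-completion call: replays a scripted plan.
# A real implementation would send `messages` to an LLM endpoint instead.
_SCRIPT = [
    '{"tool": "search_tickets", "arg": "login failures"}',
    '{"answer": "Two open tickets relate to login failures; see ticket summaries."}',
]

def call_llm(messages: list[dict]) -> str:
    return _SCRIPT[sum(m["role"] == "assistant" for m in messages)]

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = json.loads(call_llm(messages))
        if "answer" in decision:                       # the model is done
            return decision["answer"]
        observation = TOOLS[decision["tool"]](decision["arg"])
        messages.append({"role": "assistant", "content": json.dumps(decision)})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "Stopped: step limit reached"

print(run_agent("Investigate the recent login failures and summarize the open tickets"))
```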
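For item 8, here is a minimal sketch of the usage metrics mentioned above: suggestion acceptance rate and the share of committed lines that originated in AI suggestions. The event-log schema and numbers are hypothetical; real telemetry pipelines at these companies are not public.

```python
# Sketch: compute acceptance rate and "share of code from AI" from a suggestion log.
# The log schema and values are hypothetical; real pipelines differ and are not public.
from dataclasses import dataclass

@dataclass
class SuggestionEvent:
    accepted: bool        # did the engineer accept the suggestion?
    lines: int            # lines in the suggestion
    edited_after: bool    # was it substantially edited before commit?

events = [
    SuggestionEvent(True, 6, False),
    SuggestionEvent(False, 12, False),
    SuggestionEvent(True, 3, True),
]
total_committed_lines = 120   # lines merged in the same period (hypothetical)

accepted = [e for e in events if e.accepted]
acceptance_rate = len(accepted) / len(events)
ai_lines = sum(e.lines for e in accepted)

print(f"Acceptance rate: {acceptance_rate:.0%}")
print(f"Share of committed lines from AI suggestions: {ai_lines / total_committed_lines:.0%}")
print(f"Accepted-but-edited share: {sum(e.edited_after for e in accepted) / len(accepted):.0%}")
```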
Controlled Studies with Quantitative Findings
Here is a summary of peer-reviewed, controlled-trial studies that provide quantitative data on how much code and how many engineering tasks are being assisted or generated by AI, plus methodological notes and gaps. There aren’t many studies that offer company-scale, cross-company comparisons, but some give useful windows.
• Generative AI and labour productivity: a field experiment on coding (Ant Group / CodeFuse)
o Sample / subjects: programmers at Ant Group; treatment vs control groups, junior and senior programmers.
o What was measured: productivity (number of lines of code) with vs without the AI tool (CodeFuse).
o Key quantitative results: productivity increased by ~55% for AI users; about one-third of that increase was directly attributable to code generated by the AI, with the rest due to efficiency gains.

• How much does AI impact development speed? An enterprise-based randomized controlled trial
o Sample / subjects: 96 full-time Google software engineers working on a complex task; three AI features vs a non-AI baseline.
o What was measured: time taken for the task; speed / throughput improvements.
o Key quantitative results: AI reduced time on task by about 21% for this kind of enterprise-grade work (controlling for known factors).

• Google: AI-assisted Assessment of Coding Practices in Industrial Code Review
o Sample / subjects: a deployed system called “AutoCommenter” that uses LLMs to learn and enforce best practices (style, adapted to internal practices) in code reviews across many developers and codebases.
o What was measured: adoption, correctness in detecting violations / improvements, and workflow impact.
o Key quantitative results: the study shows the approach is feasible and has a measurable positive impact on developer workflow, but it publishes no number of the form “X% of code is AI-generated / AI-assisted”.

Table: Peer / Controlled Studies with Quantitative Findings

Other Studies of AI Tools (Quality / Usage): Less Direct on Percentage of Code Generated
These don’t give exact percentages of code-generation by AI in production for companies, but help understand what
tools can do, how good they are, and what limitations exist.
• Evaluating the Code Quality of AI-Assisted Code Generation Tools: GitHub Copilot, CodeWhisperer, ChatGPT — measures correctness, reliability, maintainability, etc. It doesn’t quantify production usage, but it gives an idea of how much of the generated code is usable.
• Security Weaknesses of Copilot-Generated Code — looks at proportion of snippets with security issues;
again not directly measuring “% code in repo written by AI” but quality concerns when AI-assisted.
• Assessing AI-Based Code Assistants in Method Generation Tasks (Copilot, Tabnine, ChatGPT, Bard) —
compares tool performance on standard benchmark tasks.

Quantitative data about impact of AI adoption in large tech enterprises
Here are a few recent (2025) working papers focused on large companies (Google, Microsoft, Meta) that give quantitative or semi-quantitative data about AI’s impact in large tech / enterprise settings, along with what they say and what gaps remain. These are among the better sources available as of now.
Recent Working Papers & Studies
• “The Effects of Generative AI on High-Skilled Work: Evidence from Three Field Experiments with Software Developers” (Zheyuan Kevin Cui, Mert Demirer, Sonia Jaffe, Leon Musolff, Sida Peng, Tobias Salz)
o What it measured: randomized controlled trials (RCTs) at Microsoft, Accenture, and a Fortune-100 company; developers given access to an AI coding assistant (code completions) vs a control group without; ~4,867 developers involved.
o Key quantitative findings: ~26.1% increase in completed tasks for developers using the AI tool; less experienced developers saw larger gains.

• “Intuition to Evidence: Measuring AI’s True Impact on Developer Productivity” (Anand Kumar et al., “DeputyDev” platform)
o What it measured: longitudinal deployment of an internal AI platform combining code generation and automated review; tracked adoption, usage, code shipped, and PR cycle times over ~1 year across 300 engineers.
o Key quantitative findings: ~31.8% reduction in PR review cycle time for engineers using the tool; heavy users shipped ~61% more code to production; at peak usage, ~30–40% of shipped code came via the tool (AI-assisted / generated); overall code shipment volume increased ~28%.

• “Paradigm Shift on Coding Productivity Using GenAI” (Liang Yu, 2025)
o What it measured: surveys / interviews in industrial domains (telecom, FinTech) about adoption of GenAI tools such as Codeium and Amazon Q, aiming to understand which kinds of tasks benefit and what the limits are.
o Key quantitative findings: GenAI tools significantly help with routine tasks (refactoring, documentation, boilerplate) but less so with deeply domain-specific or complex design work; increases in speed and efficiency are reported, but no full quantitative split of production code into human vs AI.

• “Human-Written vs. AI-Generated Code: A Large-Scale Study of Defects, Vulnerabilities, and Complexity”
o What it measured: compared a large sample (500k+ code samples) of human- vs AI-generated code (from several LLMs) in Python and Java along defect rates, vulnerabilities, and complexity.
o Key quantitative findings: AI-generated code tends to be simpler and more repetitive, but is more likely to contain hard-coded debugging constructs and unused constructs, and in some cases more high-risk vulnerabilities; the study quantifies quality trade-offs but does not say what fraction of internal enterprise code is AI-generated.

Table: Recent Working Papers & Studies

Gaps & Open Questions
• Many of these reports / statements do not clearly define what “code generated by AI” means: lines vs
modules vs suggestions vs small edits; whether “generated” means accepted as is, or heavily edited. That
makes comparison difficult.
• Not many peer-reviewed works yet that cover very large scale enterprise codebases over long periods, with
detailed breakdowns by project type, seniority, code criticality, etc.
• Quality, security, and maintainability trade-offs are still under-researched: e.g. what kinds of bugs or
vulnerabilities tend to be introduced in AI-generated code, under what conditions, and how those impact
long-term maintenance effort.
• Also, adoption / usage vs impact: many tools are adopted, but tracking how much of the code shipped
actually comes from AI (versus simply assisted) is harder.

Breakdown of “25% tasks by AI” at Google
Here’s a breakdown of what “25% tasks by AI” refers to at Google, and the AI tools/applications they use to automate
those tasks.
• Sundar Pichai (Google’s CEO) said that over 25% of Google’s new code is now being generated by AI, then
reviewed and accepted by engineers.
• More recently, that figure has been reported as ~30% in some internal estimates, again referring to new code produced with AI assistance.
In short, roughly a quarter of new code writing is AI-generated, subject to human verification.
Here is a more detailed view of AI / automation tools and techniques that Google is believed to use to automate
engineering, testing, debugging, and routine workflows.
Known & documented AI / automation tools at Google
1. Goose (internal AI coding assistant)
o This is a heavily reported internal model trained on ~25 years of Google’s engineering data, used by
engineers to generate code, answer questions about internal tech stacks, and assist with code edits.
o It is part of Google’s push to have engineers adopt internal models and reduce reliance on third-party AI tools.
o It’s often described as a variant or fine-tuned relative of Google’s public LLMs (Gemini, etc.) adapted
for internal use.
2. Smart Paste (post-paste editing / suggestion system)
o Google published a paper on Smart Paste, an IDE feature that monitors pasted code and proposes
corrections (e.g. renaming variables, formatting, style alignment, or cross-language translation).
o According to Google’s internal metrics, Smart Paste suggestions are accepted ~45% of the time.
o The authors mention that among all code written at Google, the accepted Smart Paste suggestions
already constitute over 1% of it.
3. Agent-driven program repair / bug reproduction (BRT Agent & APR integration)
o In research, Google describes an agentic bug reproduction + repair pipeline: the “BRT Agent” (a fine-tuned LLM-based agent) takes bug reports, generates test cases to reproduce the bug (Bug Reproduction Tests), and feeds them into an Automated Program Repair (APR) system. A simplified sketch of this pattern appears after this list.
o This approach improved the bug-fixing success rate by ~30% in Google’s internal trials on large-scale proprietary code.
4. Hyperparameter tuning / optimization infrastructure — Vizier
o Google uses Vizier, an internal black-box optimization platform (open sourced as OSS Vizier) to tune
hyperparameters in ML systems, experiment management, and model selection.
o This is a more “infrastructure-level AI/automation” rather than user-facing, but it automates many
parts of ML workflows.
5. Project Mariner (web-automation / agent experiment)
o Google’s DeepMind is developing Project Mariner, an AI agent prototype that can operate within web
browsers: interpret UI, fill forms, navigate websites, and execute multi-step workflows.
o It is intended to become part of Gemini / agent APIs to let users/developers build more agentic
workflows.
6. Jules (coding agent for bug fixes & pull requests)
o Google announced Jules, an AI coding assistant that targets bug fixes and automated PR generation
for Python & JavaScript in GitHub workflows.
o Jules can plan edits across multiple files, implement fixes, and present pull requests for human
review (i.e. it doesn’t directly commit changes without oversight).
o It is described as part of Google’s efforts to offload time-consuming tasks so engineers can focus on
higher-level development.
7. “Scheduled Actions” in Gemini
o For more user-productivity tasks, Gemini supports a feature called Scheduled Actions, which can
automate recurring actions like summarizing calendar, email, creative writing, etc.
8. AI in HR / workplace productivity via Gemini in Workspace
o Internally and in its Workspace products, Google integrates Gemini to assist in HR-related tasks: job
descriptions, onboarding checklists, policy drafts, summarization, etc.
o While this is more a public-facing feature, it suggests similar internal usage for administrative
automation.
9. AI Playbook / internal guidelines & enforcement
o Google has published an internal “AI playbook” for engineers that gives guidance on how to use AI
tools, what rules to follow, safety practices, review process, etc.
o Also, Google has mandated or strongly encouraged use of internal AI models for engineering tasks; in
some reports, performance expectations now include AI usage.
10. Internal “Cider” platform / AI agent orchestration (reportedly)
o Some sources suggest Google has an internal development platform called Cider that hosts internal
AI agents, running internal models (e.g. “Gemini for Google”) for coding and tooling purposes.
o The idea is that Cider would orchestrate how different AI models or agents are used in the
development workflow.
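
To illustrate item 3 above, here is a heavily simplified, hypothetical sketch of a bug-reproduction loop: ask a model for a candidate failing test, run it, and hand a confirmed reproduction on to a repair step. This is not Google's BRT Agent or APR system, whose internals are not public; the generator function, the module under test, and the pytest invocation are placeholders.

```python
# Hypothetical sketch of a bug-reproduction-test (BRT) loop. Not Google's system;
# the LLM call is a stub and the pytest run assumes a local project checkout.
import subprocess
import textwrap

def generate_candidate_test(bug_report: str) -> str:
    """Placeholder for an LLM call that turns a bug report into a pytest test."""
    return textwrap.dedent("""
        from mypkg.parser import parse_date   # hypothetical module under test

        def test_reproduces_bug_1234():
            # Per the bug report, two-digit years are mis-parsed.
            assert parse_date("01/02/99").year == 1999
    """)

def reproduce(bug_report: str, attempts: int = 3) -> str | None:
    """Try a few candidate tests; return the first one that fails, i.e. reproduces the bug."""
    for _ in range(attempts):
        test_code = generate_candidate_test(bug_report)
        with open("test_brt_candidate.py", "w") as f:
            f.write(test_code)
        result = subprocess.run(["pytest", "-q", "test_brt_candidate.py"])
        if result.returncode != 0:      # the test fails -> bug reproduced
            return test_code
    return None

# A confirmed reproduction would then be passed to an automated-repair step,
# and any proposed fix would go out as a human-reviewed pull request.
```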

How these tie into the “~25% AI-automated code / task” claim
• The 25%+ figure refers to new code generated or assisted by AI, which is then reviewed / accepted by
engineers.
• Many of the above systems contribute in specialized ways:
o Goose helps with code generation, edits, and queries.
o Smart Paste takes care of post-paste corrections and style fixes.
o Jules automates parts of bug repair workflows.
o BRT/agentic repair helps reduce human labor in debugging.
• Combined, these automation systems offload many routine, repetitive, or boilerplate programming tasks —
allowing human engineers to focus on complex design, architecture, product work, and oversight.

Percentage of tasks each tool contributes
Here is an estimate of how much each of Google’s known AI tools contributes to the ~25% of engineering tasks that are AI-automated, based on available data, papers, and reasonable assumptions.
Note: These are estimates, not official numbers. Google hasn’t published a breakdown, but the data suggests some
tools play larger roles than others.
• Goose (AI coding assistant): ~12–15%. The primary contributor; used widely by Google engineers to generate and complete code, answer questions about internal APIs, suggest tests, etc.
• Smart Paste: ~1–2%. Automates fixes after pasting code; a published paper says its accepted suggestions now account for ~1% of all code.
• Jules (AI code agent for PRs): ~2–3%. Used for automating bug fixes, generating pull requests, and cross-file changes; still rolling out, but important for repetitive tasks.
• Bug reproduction + repair agents (BRT/APR): ~1–2%. Accelerates debugging by reproducing bugs and proposing fixes with AI assistance; useful in large internal projects.
• Gemini Code Assist / IDE integration: ~3–4%. Helps with live code completions, refactors, and suggestions (think of it as a Copilot-lite tool); likely merged with Goose or used for lower-code tasks.
• Internal LLM queries for documentation / help: ~1%. Used to summarize or interpret internal APIs, codebases, or documentation, freeing engineers from repetitive lookups.
• AI-based test generation / coverage tools: ~0.5–1%. Auto-generate unit/integration tests or detect gaps in test coverage; used internally in some divisions.
• AI in CI/CD pipelines (e.g. code analysis, error-pattern detection): ~0.5–1%. Used to suggest fixes before commits are merged, or to detect common failure patterns.
• AI in code review (suggestions / diff explanations): <1%. Assists reviewers by explaining code diffs, summarizing PRs, or flagging anomalies; an emerging use.

Table: Estimated Contribution to Google’s “AI-automated 25% Tasks”

Total: ~25–30% of Engineering Tasks
• These figures sum to ~25–30%, depending on adoption rates, team-specific usage, and feature maturity.
• Google internally reported that up to 30% of new code was AI-assisted as of mid-2025.
Key Highlights
• Goose is the dominant contributor — it’s deeply integrated into Google’s IDEs and workflows.
• Tools like Smart Paste, Jules, and BRT agents reduce effort on small, repetitive, or annoying tasks.
• The rest are marginal but growing — such as AI-assisted test coverage and code review.
• Google treats AI automation as an engineering productivity enhancer, not a full replacement.

Google’s AI Automation Pie Chart
Here is a visual breakdown of Google’s AI-automation adoption; a comparison with what is visible about Microsoft and others follows in the next section. Note that these are approximations, not official data.
Given below is a pie distribution of how the different AI tools might divide up that ~25%. Converted to a pie or donut chart, Goose plus general code generation would take the largest slice (perhaps ~50–60% of the AI portion), with the other tools filling in the rest.

Conceptual pie chart: Google’s AI-automated engineering tasks (~25%)
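
As the chart itself is not reproduced here, the following matplotlib sketch rebuilds it from the midpoints of the estimated ranges in the table above; the values are this article's rough estimates, not official Google data.

```python
# Conceptual pie chart of Google's ~25% AI-automated engineering tasks,
# using midpoints of the estimated ranges from the table above (sums to ~25.5).
import matplotlib.pyplot as plt

contributions = {
    "Goose (coding assistant)": 13.5,
    "Gemini Code Assist / IDE": 3.5,
    "Jules (PR/bug-fix agent)": 2.5,
    "Smart Paste": 1.5,
    "BRT/APR repair agents": 1.5,
    "Internal LLM doc queries": 1.0,
    "Test generation / coverage": 0.75,
    "AI in CI/CD pipelines": 0.75,
    "AI in code review": 0.5,
}

fig, ax = plt.subplots(figsize=(7, 7))
ax.pie(
    list(contributions.values()),
    labels=list(contributions.keys()),
    autopct="%.1f%%",       # share of the AI-automated portion, not of all tasks
    startangle=90,
)
ax.set_title("Google's AI-automated engineering tasks (~25% of all tasks)")
plt.tight_layout()
plt.savefig("google_ai_pie.png", dpi=150)
```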

Comparison: Google vs Microsoft vs others
Here’s how Google stacks up (publicly) against peers in terms of AI automation in coding / engineering workflows:
• Google
o Report / claim: “Over 25% of new code is AI-generated.”
o What they automate / tools: Goose (internal), Smart Paste (post-paste fixes), the Jules agent for bug fixes / PRs, internal bug reproduction/repair pipelines, IDE assistance, test generation, LLMs for docs and queries.
o Notes & uncertainties: public disclosures don’t break down the exact percentage by tool, and “AI-generated” includes many assisted / hybrid workflows with engineer oversight.

• Microsoft / GitHub / Copilot ecosystem
o Report / claim: GitHub Copilot is widely used.
o What they automate / tools: code completions, suggested tests, AI in Azure DevOps pipelines, code analysis tools (e.g. IntelliCode).
o Notes & uncertainties: Microsoft invests heavily in AI-assisted development.

• Other companies / startups
o Report / claim: some claim high AI usage in code (e.g. Robinhood says ~50%).
o What they automate / tools: a similar stack of autocomplete, agentic code tools, internal LLMs, and AI testing.
o Notes & uncertainties: these claims tend to be less verifiable; the scale / infrastructure is often smaller, so the effect may be concentrated on boilerplate rather than full features.

Table: Comparison of AI Automation between Google vs Microsoft vs others
A bar chart comparing AI adoption at Google, Microsoft, and others is given below.

What the chart shows:
• Google: ~25% of engineering tasks are AI-assisted (via Goose, Jules, etc.)
• Microsoft: ~20% (GitHub Copilot + DevOps integrations — unofficial estimate)
• Robinhood: ~50% (as per recent reports)
• Average tech startup: ~30% (varies widely by stack and size)
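
The chart can be reproduced with the following matplotlib sketch; the values are the approximate, largely unofficial estimates listed above.

```python
# Bar chart of the approximate AI-adoption figures quoted in this section.
import matplotlib.pyplot as plt

adoption = {
    "Google": 25,               # ~25% of engineering tasks AI-assisted
    "Microsoft": 20,            # unofficial estimate (Copilot + DevOps)
    "Robinhood": 50,            # as per recent reports
    "Average tech startup": 30, # varies widely by stack and size
}

fig, ax = plt.subplots(figsize=(7, 4))
ax.bar(list(adoption.keys()), list(adoption.values()), color="steelblue")
ax.set_ylabel("Share of engineering tasks AI-assisted (%)")
ax.set_title("Approximate AI adoption in engineering workflows")
ax.set_ylim(0, 60)
for i, v in enumerate(adoption.values()):
    ax.text(i, v + 1, f"{v}%", ha="center")
plt.tight_layout()
plt.savefig("ai_adoption_bar.png", dpi=150)
```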

What to Watch in 2025-2026
• Increased uptake of agentic AI (autonomous agents that not only assist but act) in IT operations and
infrastructure.
• Greater convergence of AI, edge/IoT, hybrid cloud — automation will extend out of the data-centre to edge
devices and field operations.
• More business-user driven automation (democratised automation) via no-/low-code platforms, which will
shift the dynamics of IT vs business collaboration.
• Focus on explainability, auditability and governance of automation – regulatory and internal risk concerns
will increase.
• The maturation of “automation platforms” offering integrated stacks (RPA + AI + orchestration) rather than
point solutions.
• Growth of intelligent document processing, knowledge-work automation (e.g., automating support ticketing,
contract review, etc).
• Skills and organisational transformation will become a big part of the automation initiatives — not just
technology.
• In Asia, automation will start making a visible dent in workforce growth trends in the IT sector.
Recommendations: What Should IT Organisations Do
• Start with high-value use-cases: identify tasks that are repetitive, rule-based, high-volume, error-prone.
• Ensure you have the data & process maturity: cleaning processes, defining workflows, collecting metrics.
• Select technology vendors/platforms that support end-to-end automation and integrate well.
• Build governance: define KPIs, monitor automation performance, set roles/responsibilities, manage risk.
• Focus on the workforce: reskill employees, create automation-ops roles, encourage collaboration between IT,
business and citizen developers.
• Monitor and manage change: communicate clearly, involve users, and redesign processes.
• Keep human-in-the-loop where needed: especially for exceptions, strategy, judgement calls.
• Think scalability from the start: avoid “one off” automation islands; aim for platforms that can scale.
• Measure early — track impact on cost, error-rate, throughput, employee satisfaction, customer outcomes.
Conclusion
AI-driven task automation is rapidly transforming the IT industry, reshaping how organizations operate, innovate, and
deliver value. As AI technologies mature, they are increasingly being used to automate routine, repetitive, and time-consuming tasks, ranging from infrastructure management and software testing to customer support and
cybersecurity. This shift is not only improving efficiency and reducing operational costs but also freeing up IT
professionals to focus on more strategic, creative, and high-impact work.
Emerging trends such as hyperautomation, AIOps, and low-code/no-code development are accelerating adoption,
making automation more accessible across all levels of IT. At the same time, organizations must address challenges
related to workforce upskilling, ethical AI use, data privacy, and system integration to fully realize the benefits. As AI
becomes more embedded in IT processes, its role will continue to evolve from a support tool to a central component
of IT strategy, driving innovation and competitiveness in the digital age.
To summarise, AI task automation is not just a trend. It is a critical enabler of the future IT landscape, pushing the
boundaries of what’s possible and redefining the role of technology in business.