105 slides, Apr 17, 2025
Beyond Autocomplete:
Local AI Code Completion Demystified


Daniel Savenkov, Senior ML Engineer at JetBrains

Agenda:
AI Code Completion 101
Local AI Code Completion by JetBrains: Bird's-eye View
Local AI Code Completion by JetBrains: Details
Local AI Code Completion by JetBrains: Team

Typical AI Code Completion
Uses a Large Language Model; needs the cloud to operate.
User Typing → Code → Large Language Model in Cloud → Smart Suggestions
Pros:
Can use really big models
No additional computation on the user's side
Cons:
Code is sent over the Internet (poor security, poor latency)
Someone has to pay for the cloud (either users or the company)

Can we go local?
(The cloud: "C'mooon, you need me!" … "OK, OK, I'll go away.")

Local AI Code Completion by JetBrains: Bird's-eye View

Full Line Code Completion by JetBrains
Local AI
Uses a tiny language model that runs on your laptop
Available out of the box in JetBrains IDEs
Saves a vast amount of typing
Checks semantic correctness of the generated code
Does not require any additional subscription

Full Line Code Completion by JetBrains: feedback

Local AI Code Completion by JetBrains: Details

Full pipeline:
User Typing → Code → Triggering Logic → Prompt Construction → Caching → Model Inference (execution) → Suggestion Postprocessing → Smart Suggestions
Supporting components: Model Training, Language Model, Offline Evaluation, Online Evaluation and Monitoring
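The stages of the pipeline can be sketched as one function chain. This is a minimal illustration, assuming toy triggering, caching, and postprocessing rules; none of these names come from the actual Full Line Code Completion code base:

```python
# Hypothetical sketch of a local completion pipeline.

def should_trigger(prefix):
    """Triggering logic: e.g. don't fire on empty context or right after a quote."""
    stripped = prefix.rstrip()
    return bool(stripped) and not stripped.endswith(("#", '"', "'"))

def build_prompt(prefix, max_chars=1536):
    """Prompt construction: keep only the most recent context the model can see."""
    return prefix[-max_chars:]

_cache = {}

def complete(prefix, model):
    """Triggering -> prompt -> cache -> inference -> postprocessing."""
    if not should_trigger(prefix):
        return None
    prompt = build_prompt(prefix)
    if prompt not in _cache:                # caching
        _cache[prompt] = model(prompt)      # model inference
    suggestion = _cache[prompt]
    return suggestion if suggestion.strip() else None  # postprocessing

# Toy "model" that always suggests finishing a for-loop header.
toy_model = lambda prompt: "range(10):" if prompt.rstrip().endswith("in") else ""
print(complete("for i in", toy_model))  # range(10):
```

The point of the sketch is that the language model call is just one box; triggering, caching, and postprocessing are what make the feature usable.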

Measuring code completion

How do we make sure that adding Full Line Completion improves the user experience?

A/B testing via the JetBrains Early Access Program: Full Line enabled vs. Full Line disabled

A/B testing
+ Positive signal: users are enjoying the new feature
− Negative signal: the new feature annoys users
How it usually happens vs. how it happens at JetBrains
Metrics:
Fraction of code generated
Explicit cancel rate
Deletion-after-accepting rate
IDE performance degradation
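Metrics like "fraction of code generated" and "explicit cancel rate" can be computed from completion event logs. A minimal sketch, assuming a hypothetical event schema (the real telemetry format is not public):

```python
# Hypothetical event log: shown/accepted/cancelled completion events plus manual typing.
events = [
    {"type": "shown"}, {"type": "accepted", "chars": 24},
    {"type": "shown"}, {"type": "cancelled"},
    {"type": "typed", "chars": 80},
]

def fraction_generated(log):
    """Chars inserted by accepted suggestions / all chars that entered the editor."""
    gen = sum(e["chars"] for e in log if e["type"] == "accepted")
    typed = sum(e["chars"] for e in log if e["type"] == "typed")
    return gen / (gen + typed)

def explicit_cancel_rate(log):
    """Explicit cancellations per shown suggestion."""
    shown = sum(e["type"] == "shown" for e in log)
    return sum(e["type"] == "cancelled" for e in log) / shown

print(round(fraction_generated(events), 3))  # 0.231
print(explicit_cancel_rate(events))          # 0.5
```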

Full Line Code Completion A/B test results
Fraction of code generated: +30–100%
Explicit cancel rate: didn't change
Deletion-after-accepting rate: didn't change
IDE performance: didn't change


Annoying suggestions: invalid code

Suggestion postprocessing: invalid code filtering
We show a suggestion only after it has passed the IDE's correctness checks.
Pros:
The user never sees semantically incorrect code (wrong API calls, non-existent variables, etc.)
Cons:
The correctness check may time out
No suggestions while indexing, for some languages (e.g. Java, Kotlin)
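The filter-with-a-time-budget idea can be sketched as follows. Here a simple "does it parse?" check stands in for the IDE's much richer semantic checks, and the timeout handling mirrors the "check may time out" caveat above; all names are illustrative:

```python
import ast
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def is_valid(code):
    """Stand-in for the IDE's semantic checks: here, just 'does it parse?'."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def filter_suggestion(prefix, suggestion, budget_s=0.1):
    """Show the suggestion only if the check both passes and finishes in time."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        fut = pool.submit(is_valid, prefix + suggestion + "\n    pass")
        try:
            return suggestion if fut.result(timeout=budget_s) else None
        except TimeoutError:
            return None  # check timed out: safer to show nothing

print(filter_suggestion("for i in ", "range(10):"))  # range(10):
print(filter_suggestion("for i in ", "range(10:"))   # None
```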

Reduce annoying suggestions: smart filtering
The language model hasn't seen any of the user's actions, so it can't predict whether the user will accept the suggestion.
Language model input: "for i in" → predictions: "range(10):", "range(5):", "numbers_list:"

Predict the probability of suggestion acceptance?
Features to predict the probability of acceptance:
Language model's output probability
Length of the context
User typing speed
Entity under the caret (method/variable/…)
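Extracting the listed features could look roughly like this; the function and field names are hypothetical, and typing speed is derived from inter-keystroke gaps as one plausible definition:

```python
# Hypothetical feature extraction for the acceptance-probability model.
def extract_features(prefix, suggestion_prob, keystroke_times, caret_entity):
    gaps = [b - a for a, b in zip(keystroke_times, keystroke_times[1:])]
    return {
        "model_probability": suggestion_prob,          # LM output probability
        "context_length": len(prefix),                 # length of the context
        "typing_speed_cps": 1 / (sum(gaps) / len(gaps)) if gaps else 0.0,
        "entity_is_method": caret_entity == "method",  # entity under the caret
    }

feats = extract_features("for i in ", 0.83,
                         keystroke_times=[0.0, 0.2, 0.4, 0.6],
                         caret_entity="method")
print(feats)
```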

Filter Model
User actions: Early Access Program (an opportunity to use paid JetBrains products for free)
More logs are collected (code is never collected into logs)
We can train a model to predict suggestion acceptance using features describing the context and the generated suggestion
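A filter model of this kind can be as small as a logistic-regression scorer over the features. The sketch below uses made-up weights purely for illustration; the trained JetBrains model is not public:

```python
import math

# Hypothetical filter model: a tiny logistic-regression scorer.
# These weights are invented for the example, not trained values.
WEIGHTS = {"model_probability": 3.0, "context_length": 0.002,
           "typing_speed_cps": -0.1}
BIAS = -2.0

def acceptance_probability(features):
    z = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # sigmoid

def should_show(features, threshold=0.3):
    """Suppress the suggestion when the predicted acceptance chance is low."""
    return acceptance_probability(features) >= threshold

confident = {"model_probability": 0.9, "context_length": 500, "typing_speed_cps": 2.0}
unsure = {"model_probability": 0.1, "context_length": 20, "typing_speed_cps": 8.0}
print(should_show(confident), should_show(unsure))  # True False
```

Raising the threshold trades suggestion volume for precision, which is exactly the acceptance-rate vs. cancel-rate trade-off reported on the next slide.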

Introducing the filter model
Acceptance rate increases by 20–50%
Explicit cancel rate decreases by 50% and becomes equal to the baseline (no Full Line) level
Total number of selections drops by 20%


Model inference
A language model is not an executable.
Efficient inference of language models is a rapidly developing field.
We use a native, low-level, open-source inference engine written in C++ to ensure efficient inference.

Model inference: llama.cpp progress
Generation time using open-source tools improves rapidly.
[Chart: Full Line generation time with llama.cpp (ms), Spring 2023 vs. Fall 2023; axis ticks at 50 and 100]
llama.cpp provides us:
The most efficient generation
Out-of-the-box usage of the GPU on Apple Silicon (M1/M2/M3)
We are contributing back to llama.cpp =)
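Per-completion generation times like those in the chart can be measured with a small harness. In this sketch, `generate()` is a stub standing in for an actual llama.cpp call; only the timing logic is the point:

```python
import statistics
import time

# Stub standing in for a llama.cpp inference call.
def generate(prompt):
    time.sleep(0.005)  # pretend inference takes ~5 ms
    return "range(10):"

def completion_latency_ms(prompt, runs=20):
    """Median wall-clock time per completion, in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        generate(prompt)
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)  # median is robust to warm-up spikes

print(f"median latency: {completion_latency_ms('for i in '):.1f} ms")
```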

Model
We train language models from scratch.
We use programming-language-specific models.
We use a 100M-parameter Llama model with a 1536-token context size.
The model occupies 50–500 MB of RAM depending on the context (model parameters are quantized to 4 bits).
Models are trained on AWS instances with 8 A100 GPUs.
Full training for one model costs ~$5,000.
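The low end of that memory range follows directly from the quantization arithmetic: 100M parameters at 4 bits each is 50 MB. A quick check (the attribution of the remaining memory to the KV cache is an assumption based on "depending on the context"):

```python
# Back-of-the-envelope check of the model size above.
params = 100_000_000
bits_per_param = 4  # 4-bit quantization
weight_mb = params * bits_per_param / 8 / 1_000_000
print(weight_mb)  # 50.0 -> the low end of the 50-500 MB range;
                  # presumably the rest grows with the context (e.g. KV cache)
```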


Offline Evaluation
Evaluation of the language model
Evaluation of the entire pipeline

Evaluation of the entire pipeline:
Trigger code completion in random places in a file
The entire generation pipeline is executed
Metrics: suggestion vs. ground-truth match; latency
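The steps above can be sketched as a small harness: pick random trigger positions, run the whole pipeline, and compare against the text that actually follows. All names are hypothetical, and exact-line match is only one possible matching metric:

```python
import random
import time

# Hypothetical offline-evaluation harness.
def evaluate(files, pipeline, samples_per_file=3, seed=0):
    rng = random.Random(seed)
    matches, latencies = [], []
    for text in files:
        for _ in range(samples_per_file):
            cut = rng.randrange(1, len(text))           # random trigger position
            prefix = text[:cut]
            ground_truth = text[cut:].split("\n")[0]    # rest of the current line
            start = time.perf_counter()
            suggestion = pipeline(prefix)               # entire pipeline runs here
            latencies.append((time.perf_counter() - start) * 1000)
            matches.append(suggestion == ground_truth)  # exact-match metric
    return sum(matches) / len(matches), sum(latencies) / len(latencies)

# Toy pipeline that always predicts the rest of the current line is empty.
accuracy, mean_ms = evaluate(["for i in range(10):\n    pass\n"],
                             pipeline=lambda prefix: "")
print(f"exact match: {accuracy:.2f}, mean latency: {mean_ms:.2f} ms")
```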


Local AI Code Completion by JetBrains: Team

Team
14 people:
3 ML engineers
4 full-stack engineers (ML + Kotlin)
3 Kotlin developers
1 C++ developer
0.5 QA
1 Product Manager
1 Team Lead

Takeaways
Local AI is rapidly developing.
Making good AI code completion is much more than deploying a good language model.
Using local AI models instead of the cloud makes your development process more secure.
It's still possible to build good AI code completion on top of local AI models.

Beyond Autocomplete: Local AI Code Completion Demystified
Daniel Savenkov, Senior ML Engineer at JetBrains

Full Line Code Completion arXiv paper (December 2023)