Beyond Autocomplete: Local AI Code Completion Demystified
—
Daniel Savenkov, Senior ML Engineer at JetBrains
AI Code Completion 101
Local AI Code Completion by JetBrains: Bird-eye View
Local AI Code Completion by JetBrains: Details
Local AI Code Completion by JetBrains: Team
Typical AI Code Completion
Uses a Large Language Model
Needs the cloud to operate
[Diagram: User Typing Code → Large Language Model in the Cloud → Smart Suggestions]
Pros:
Can use really big models
No additional computations on the user side
Cons:
Code is sent to the Internet (poor security, poor latency)
Someone has to pay for the cloud (either users or the company)
Can we go local?
[Cartoon: the cloud protests "C'mooon, you need me", then concedes "OK, OK, I'll go away"]
Local AI Code Completion by JetBrains: Bird-eye View
Full Line Code Completion by JetBrains
Local AI
Uses a tiny language model that runs on your laptop
Available out of the box in JetBrains IDEs
Saves a vast amount of typing
Checks semantic correctness of the generated code
Does not require any additional subscription
Full Line Code Completion by JetBrains: feedback
Local AI Code Completion by JetBrains: Details
[Pipeline diagram: User Typing Code → Smart Suggestions, via Triggering Logic, Prompt Construction, Caching, Language Model, Model Inference (execution), and Suggestion Postprocessing; supported by Model Training, Offline Evaluation, and Online Evaluation and Monitoring]
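To make the data flow concrete, here is a minimal Python sketch of how such a completion pipeline could be wired together. All names (CompletionPipeline, should_trigger, build_prompt, postprocess) and the heuristics inside them are illustrative assumptions, not the actual JetBrains implementation.

```python
# Minimal sketch of a local completion pipeline (illustrative assumptions throughout;
# this is not the actual JetBrains implementation).
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class CompletionPipeline:
    model: object                      # local language model wrapper (assumed to expose .generate)
    cache: dict = field(default_factory=dict)

    def should_trigger(self, prefix: str) -> bool:
        # Triggering logic: for example, skip empty lines and comments.
        current_line = prefix.rsplit("\n", 1)[-1].strip()
        return bool(current_line) and not current_line.startswith("#")

    def build_prompt(self, file_text: str, caret: int, max_chars: int = 4000) -> str:
        # Prompt construction: the text before the caret, truncated to a context budget.
        return file_text[max(0, caret - max_chars):caret]

    def postprocess(self, raw: str) -> Optional[str]:
        # Suggestion postprocessing: keep a single line; a real system would also run
        # IDE correctness checks here (see the invalid-code filtering slide).
        line = raw.split("\n", 1)[0].rstrip()
        return line or None

    def complete(self, file_text: str, caret: int) -> Optional[str]:
        prompt = self.build_prompt(file_text, caret)
        if not self.should_trigger(prompt):
            return None
        if prompt in self.cache:                       # caching
            return self.cache[prompt]
        raw = self.model.generate(prompt)              # model inference
        suggestion = self.postprocess(raw)
        self.cache[prompt] = suggestion
        return suggestion
```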
Measuring the code completion
How do we make sure that adding Full Line Completion improves user experience?
JetBrains Early Access Program
AB-testing: Full Line enabled vs. Full Line disabled
+ Positive signal: users are enjoying the new feature
- Negative signal: the new feature annoys users
How it happens at JetBrains: concrete metrics for both signals:
Fraction of code generated (positive signal)
Explicit cancel rate (negative signal)
Deletion after accepting rate (negative signal)
IDE performance degradation (negative signal)
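As an illustration, the sketch below computes these signals from a stream of anonymized completion events. The event schema (shown, canceled, deleted_after_accept, and the character counters) is an assumption made for the example, not JetBrains' telemetry format.

```python
# Hypothetical computation of A/B signals from completion events.
# The event fields are assumed for illustration only.
from typing import Iterable, Mapping


def ab_signals(events: Iterable[Mapping]) -> dict:
    shown = accepted = canceled = deleted_after_accept = 0
    completed_chars = typed_chars = 0
    for e in events:
        shown += e["shown"]
        accepted += e["accepted"]
        canceled += e["canceled"]
        deleted_after_accept += e["deleted_after_accept"]
        completed_chars += e["chars_from_completion"]
        typed_chars += e["chars_typed_manually"]
    total_chars = completed_chars + typed_chars
    return {
        # Positive signal: how much of the written code came from completions.
        "fraction_of_code_generated": completed_chars / total_chars if total_chars else 0.0,
        # Negative signals: the feature gets in the user's way.
        "explicit_cancel_rate": canceled / shown if shown else 0.0,
        "deletion_after_accepting_rate": deleted_after_accept / accepted if accepted else 0.0,
    }
```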
Full Line Code Completion A/B test results
Fraction of code generated: +30-100%
Explicit cancel rate: didn't change
Deletion after accepting rate: didn't change
IDE performance: didn't change
Annoying suggestions: invalid code
Suggestions postprocessing: invalid code filtering
We show a suggestion only after it has passed the IDE's correctness checks.
Pros:
The user never sees semantically incorrect code (wrong API calls, non-existent variables, etc.)
Cons:
The correctness check may time out
No suggestions while indexing for some languages (e.g. Java, Kotlin)
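A rough sketch of this filtering step, assuming a check_semantics callable that stands in for the IDE's correctness checks (which are not part of the slides); the timeout value is an arbitrary placeholder.

```python
# Sketch of invalid-code filtering with a timeout (illustrative; check_semantics is a
# stand-in for the IDE's real correctness checks).
import concurrent.futures
from typing import Callable, Optional

# A single worker reused across calls; illustrative only.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)


def filter_suggestion(
    suggestion: str,
    check_semantics: Callable[[str], bool],
    timeout_s: float = 0.1,
) -> Optional[str]:
    """Show the suggestion only if the correctness check passes in time."""
    future = _pool.submit(check_semantics, suggestion)
    try:
        ok = future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # The con from the slide: the check may time out, in which case nothing is shown.
        return None
    return suggestion if ok else None
```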
Reduce annoying suggestions: smart filtering
The language model hasn't seen any of the user's actions.
The language model can't predict whether the user will accept the suggestion.
Example: Language Model Input "for i in" → Language Model Predictions: "range(10):", "range(5):", "numbers_list:"
Predict probability of suggestion acceptance?
Features to predict probability of acceptance:
Language model's output probability
Length of the context
User typing speed
Entity under the caret (method/variable/…)
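The slides don't say what form the filter model takes, so as a hedged illustration the sketch below scores the listed features with a hand-written logistic function; the weights and threshold are made-up placeholders rather than a trained model.

```python
# Illustrative filter model over the features named on the slide.
# Weights and threshold are placeholders, not JetBrains' trained model.
import math
from dataclasses import dataclass


@dataclass
class SuggestionFeatures:
    lm_output_probability: float   # language model's probability for the suggestion
    context_length: int            # length of the prompt/context, e.g. in tokens
    typing_speed_cps: float        # user's recent typing speed, characters per second
    caret_entity_is_method: bool   # simplified "entity under the caret" (method vs. other)


def acceptance_probability(f: SuggestionFeatures) -> float:
    # Placeholder weights; a real filter model would be trained on logged user actions.
    z = (
        -1.0
        + 3.0 * f.lm_output_probability
        + 0.001 * f.context_length
        - 0.05 * f.typing_speed_cps
        + 0.5 * (1.0 if f.caret_entity_is_method else 0.0)
    )
    return 1.0 / (1.0 + math.exp(-z))


def should_show(f: SuggestionFeatures, threshold: float = 0.3) -> bool:
    # The suggestion is shown only if the predicted acceptance probability clears the threshold.
    return acceptance_probability(f) >= threshold
```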
User Actions: Early Access Program
[Diagram: User Typing Code → Language Model → Filter Model]
Opportunity to use paid JetBrains products for free
More logs are collected (code is never collected to logs)
We can train a model to predict suggestion acceptance using the features describing the context and the generated suggestion.
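If the logged features and accept/reject outcomes were available as a table, training such an acceptance predictor could look roughly like the scikit-learn sketch below; the feature columns and the tiny in-line dataset are invented for illustration.

```python
# Illustrative training of an acceptance-prediction filter model from logged features.
# No code content is used, only features describing the context and the suggestion.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [lm_output_probability, context_length, typing_speed_cps, caret_is_method]
X = np.array([
    [0.92, 830, 1.5, 1],
    [0.35, 120, 4.0, 0],
    [0.71, 640, 2.2, 1],
    [0.18,  90, 5.1, 0],
])
y = np.array([1, 0, 1, 0])   # 1 = suggestion was accepted, 0 = it was not

filter_model = LogisticRegression().fit(X, y)

# At completion time, suggestions with a low predicted acceptance probability are hidden.
print(filter_model.predict_proba([[0.80, 500, 2.0, 1]])[0][1])
```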
Introducing the filter model
Acceptance rate: +20-50%
Explicit cancel rate: -50%, dropping to the baseline (no Full Line) level
Total number of selections: -20%
[Chart comparing: no Full Line, with filter model, without filter model]
Model inference
A language model is not an executable.
Efficient inference of language models is a rapidly developing field.
We use a native, low-level, open-source inference engine written in C++ to ensure efficient inference.
Model inference: llama.cpp progress
Generation time using open-source tools rapidly improves.
[Chart: Full Line generation time with llama.cpp, ms (Spring 2023 vs. Fall 2023)]
We are contributing back to llama.cpp =)
llama.cpp provides us:
The most efficient generation
Out-of-the-box usage of the GPU on Apple Silicon (M1/M2/M3)
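For readers who want to try the same approach, a small local model can be run through llama.cpp's Python bindings (llama-cpp-python) roughly as sketched below. The model file name is hypothetical, and JetBrains embeds the C++ engine natively in the IDE rather than going through Python.

```python
# Sketch: single-line completion with a small local model via llama-cpp-python.
# Model path, context size, and sampling settings are assumptions for illustration.
from llama_cpp import Llama

llm = Llama(
    model_path="full-line-python-100m-q4.gguf",  # hypothetical 4-bit quantized model file
    n_ctx=1536,                                  # context size mentioned in the talk
)


def complete_line(prefix: str) -> str:
    out = llm(
        prefix,
        max_tokens=32,
        temperature=0.0,   # greedy decoding for deterministic suggestions
        stop=["\n"],       # full *line* completion: stop at the end of the line
    )
    return out["choices"][0]["text"]


print(complete_line("for i in "))
```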
Model
We train language models from scratch.
We use programming-language-specific models.
We use a 100M-parameter Llama model with a 1536-token context size.
The model occupies 50-500 MB of RAM depending on the context (model parameters are quantized to 4 bits).
Models are trained on AWS instances with 8 A100 GPUs.
Full training for one model costs ~$5000.
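A back-of-the-envelope check of the memory figure: 100M parameters at 4 bits is about 50 MB of weights, and the key/value cache grows with the number of context tokens in use. The layer count, hidden size, and cache dtype below are assumed for illustration; they are not the published model configuration.

```python
# Back-of-the-envelope memory estimate for a ~100M-parameter model quantized to 4 bits.
# The layer count, hidden size, and KV-cache dtype are illustrative assumptions.
params = 100_000_000
weight_bytes = params * 4 / 8                      # 4 bits per parameter
print(f"weights: ~{weight_bytes / 1e6:.0f} MB")    # ~50 MB, the lower bound from the slide

# The KV cache grows with the number of context tokens actually used:
n_layers, hidden, kv_bytes = 12, 768, 2            # assumed shape; fp16 keys/values
ctx = 1536
kv_cache = 2 * n_layers * ctx * hidden * kv_bytes  # keys + values
print(f"KV cache at full context: ~{kv_cache / 1e6:.0f} MB")
```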
Offline Evaluation
Evaluation of the language model
Evaluation of the entire pipeline
Evaluation of the entire pipeline
Trigger code completion in random places in a file.
The entire generation pipeline is executed.
Metrics: match between suggestions and the ground truth; latency.
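Sketch of what such pipeline-level evaluation might look like, reusing the illustrative CompletionPipeline from the earlier sketch; exact-match against the next line and wall-clock latency stand in for the metrics named on the slide.

```python
# Sketch of offline evaluation of the whole pipeline: trigger completion at random caret
# positions in held-out files, run the full pipeline, and compare against the code that
# actually follows. Uses the illustrative CompletionPipeline defined earlier.
import random
import time


def evaluate_pipeline(pipeline, files: list[str], samples_per_file: int = 20):
    matches, total, latencies = 0, 0, []
    for text in files:
        if len(text) < 2:
            continue
        for _ in range(samples_per_file):
            caret = random.randrange(1, len(text))
            ground_truth = text[caret:].split("\n", 1)[0].rstrip()
            start = time.perf_counter()
            suggestion = pipeline.complete(text, caret)
            latencies.append(time.perf_counter() - start)
            if suggestion is None:
                continue
            total += 1
            matches += int(suggestion == ground_truth)   # exact-match against ground truth
    return {
        "exact_match": matches / total if total else 0.0,
        "mean_latency_ms": 1000 * sum(latencies) / len(latencies) if latencies else 0.0,
    }
```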
Local AI Code Completion by JetBrains: Team
Team
14 people:
3 ML engineers
4 full-stack engineers (ML + Kotlin)
3 Kotlin developers
1 C++ developer
0.5 QA
1 Product Manager
1 Team Lead
Takeaways
Local AI is rapidly developing.
Making good AI code completion is much more than deploying a good language model.
Using local AI models instead of the cloud makes your development process more secure.
It's still possible to do good AI code completion based on local AI models.
Beyond Autocomplete: Local AI Code Completion Demystified
Daniel Savenkov, Senior ML Engineer at JetBrains
—
Full Line Code Completion arXiv paper (December 2023)