Prompt engineering for iOS developers (How LLMs and GenAI work)
ssuser8fd0ea
147 views
42 slides
May 02, 2024
About This Presentation
Size: 1.76 MB
Language: en
Slide Content
Prompt engineering for iOS developers
How LLMs work
Generative AI as a service
Prompt engineering patterns
Pair programming with AI tools
How LLMs work
Generative Pre-trained Transformer
How LLMs work - GPT architecture
Input processing
Input
Output
Decoder
Contextual and Session Handling
Safety and Content Filters
Bootstrap prompts
How LLMs work - Tokenisation
Swift is a powerful and intuitive programming language for all Apple platforms.
How LLMs work - Token Embeddings
Swift is a powerful and intuitive programming language for all Apple platforms.
Vectors with 12288 dimensions (features) and positional encoding, e.g.:
[-0.05, 0.017, …, -0.01]
[-0.03, 0.118, …, -0.02]
[-0.01, 0.007, …, -0.004]
How LLMs work - Token Embeddings
This vector encapsulates the semantic meaning of Google, minus the semantics of the word Kotlin, plus the semantics of the word Swift.
Google - Kotlin + Swift = ?
How LLMs work - Token Embeddings
Google - Kotlin + Swift = Apple
How LLMs work - Token Embeddings
Google - Kotlin + Swift =
apple, 0.426
spotify, 0.396
amazon, 0.393
netflix, 0.387
yahoo, 0.382
snapchat, 0.371
huawei, 0.368
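The vector arithmetic above can be sketched with toy embeddings and cosine similarity. Everything here is illustrative: the vectors are 3-dimensional made-up values (real GPT embeddings have 12288 dimensions), chosen only to show how "nearest word to google - kotlin + swift" is computed.

```python
from math import sqrt

# Toy "embeddings" -- all values are made up purely to illustrate the arithmetic
embeddings = {
    "google":  [1.0, 0.0, 0.0],
    "kotlin":  [0.0, 1.0, 0.0],
    "swift":   [0.0, 1.0, 1.0],
    "apple":   [1.0, 0.0, 1.0],
    "spotify": [1.0, 0.0, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# google - kotlin + swift, element-wise
query = [g - k + s for g, k, s in zip(embeddings["google"],
                                      embeddings["kotlin"],
                                      embeddings["swift"])]

# Nearest neighbour among words not used in the arithmetic
candidates = {w: v for w, v in embeddings.items()
              if w not in ("google", "kotlin", "swift")}
best = max(candidates, key=lambda w: cosine(query, candidates[w]))
print(best)  # apple
```

With real embeddings the same nearest-neighbour search over the whole vocabulary produces the ranked list above.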
How LLMs work - GPT architecture
Input processing
Transformer layer
Transformer layer
Transformer layer
120 layers (GPT-4)
Next word prediction
Decoder
Input
Output
Input + next word
Generative AI as a service - OpenAI API
{
"model": "gpt-3.5-turbo",
"messages": [{
"role": "user",
"content": "Describe Turin in one sentence"
}],
"temperature": 0.7
}
POST https://api.openai.com/v1/chat/completions
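The same request can be issued from any HTTP client. A minimal Python sketch using only the standard library (the helper names are illustrative; the endpoint, headers, and JSON shape match the slide and the OpenAI chat completions API):

```python
import json
import urllib.request

def build_chat_request(prompt, model="gpt-3.5-turbo", temperature=0.7):
    """Assemble the JSON body shown on the slide."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def invoke(prompt, api_key):
    """POST to the chat completions endpoint and return the first choice."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]
```

The Swift client on the next slides encodes exactly the same body with `Codable`.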
Generative AI as a service - OpenAI API
struct AIMessage: Codable {
    let role: String
    let content: String   // the API expects a "content" key, matching the JSON above
}

struct AIRequest: Codable {
    let model: String
    let messages: [AIMessage]
    let temperature: Double
}

extension AIRequest {
    static func gpt3_5Turbo(prompt: String) -> AIRequest {
        .init(model: "gpt-3.5-turbo",
              messages: [
                  .init(role: "user", content: prompt)
              ],
              temperature: 0.7)
    }
}
Generative AI as a service - OpenAI API
@MainActor
func invoke(prompt: String) async throws -> String {
    …
}
Generative AI as a service - OpenAI API pricing
Model          Input                Output
gpt-3.5-turbo  $1.50 / 1M tokens    $2.00 / 1M tokens
gpt-4          $30.00 / 1M tokens   $60.00 / 1M tokens
gpt-4-32k      $60.00 / 1M tokens   $120.00 / 1M tokens
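At these per-token prices, estimating the cost of a request is simple arithmetic. A small sketch (the token counts in the example are made up; the prices come from the table above):

```python
# Dollars per 1M tokens (input, output), from the pricing table above
PRICING = {
    "gpt-3.5-turbo": (1.50, 2.00),
    "gpt-4":         (30.00, 60.00),
    "gpt-4-32k":     (60.00, 120.00),
}

def request_cost(model, input_tokens, output_tokens):
    """Estimated cost in dollars for a single request."""
    input_price, output_price = PRICING[model]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# e.g. a 500-token prompt with a 200-token answer on gpt-4:
print(round(request_cost("gpt-4", 500, 200), 4))  # 0.027
```

The 20-40x gap between gpt-3.5-turbo and gpt-4 is why model choice matters when a feature ships to many users.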
Generative AI as a service - Amazon Bedrock
Model                  Input               Output
Titan Text Lite        $0.30 / 1M tokens   $0.40 / 1M tokens
Claude 3 Haiku         $0.25 / 1M tokens   $0.15 / 1M tokens
Titan Image Generator  $0.005 / image
Generative AI as a service - Amazon Bedrock
iOS app → API Gateway → Lambda → Bedrock (AWS)
Generative AI as a service - Amazon Bedrock
def invoke_model(prompt: str):
    try:
        # Claude text-completion models require the "\n\nHuman: ... \n\nAssistant:" framing
        enclosed_prompt = "\n\nHuman: " + prompt + "\n\nAssistant:"
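The truncated helper above can be sketched end-to-end. This is not the speaker's exact code: it assumes boto3, the `bedrock-runtime` client, the `anthropic.claude-v2` model id, and that model's text-completion body format (`prompt`, `max_tokens_to_sample`, and a `completion` field in the response).

```python
import json

def enclose_prompt(prompt):
    # Claude text-completion models require this exact framing,
    # including the blank line before "Human:"
    return f"\n\nHuman: {prompt}\n\nAssistant:"

def invoke_model(prompt):
    """Sketch of the slide's helper, under the assumptions named above."""
    import boto3  # imported lazily so enclose_prompt stays usable offline
    client = boto3.client("bedrock-runtime")
    body = json.dumps({
        "prompt": enclose_prompt(prompt),
        "max_tokens_to_sample": 300,
    })
    response = client.invoke_model(modelId="anthropic.claude-v2", body=body)
    return json.loads(response["body"].read())["completion"]
```

The Lambda handler on the next slide simply wraps this call with auth and error handling.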
Generative AI as a service - Amazon Bedrock
import json

def lambda_handler(event, context):
    body = json.loads(event["body"])
    prompt = body["content"]
    if event["headers"]["authorization"] != key:  # key: API key loaded elsewhere (elided on the slide)
        return {
            'statusCode': 401
        }
    try:
        completion = invoke_model(prompt=prompt)
        return {
            'statusCode': 200,
            'body': json.dumps({"content": completion})
        }
    except Exception as e:
        return {
            'statusCode': 400,
            'body': json.dumps(str(e))
        }
Generative AI as a service - Amazon Bedrock
POST https://[uuid]...amazonaws.com/dev/completions
{
"content": "Describe Turin in one sentence"
}
Generative AI as a service - Amazon Bedrock
def schedule_prompt_template(content: str) -> str:
    return f"""
<context>
...
Swift Heroes 2024 schedule ...
...
</context>
Task for Assistant:
Find the most relevant answer in the context
<question>
{content}
</question>
"""
Generative AI as a service - Amazon Bedrock
def lambda_handler(event, context):
    …
    try:
        templated_prompt = prompt
Generative AI as a service - Amazon Bedrock
POST https://[uuid]...amazonaws.com/dev/completions
{
"content": "Describe Turin in one sentence",
"template": "schedule"
}
Generative AI as a service - Amazon Bedrock
@MainActor
func invokeBedrock(content: String, template: AITemplate = .schedule) async throws -> String {
    …
    request.httpBody = try encoder.encode(BedrockAIRequest(content: content,
                                                           template: template.rawValue))
    …
    let bedrockResponse = try decoder.decode(BedrockAIResponse.self, from: data)
    return bedrockResponse.content
}

let answer = try await aiService.invokeBedrock(content: text, template: .schedule)

struct BedrockAIRequest: Codable {
    let content: String
    let template: String
}

enum AITemplate: String {
    case schedule, chat
}
Generative AI as a service - Amazon Bedrock
Prompt engineering
The emerging skill set of designing and optimizing inputs for AI systems so that they generate the desired outputs.
Prompt engineering patterns
One-shot, Few-shot and Zero-shot prompts
Summarize all the talks between context tags
Provide output as an array of json objects with a title, one main topic, and an array of tags.
The tags should not repeat the main topic.
<example>
Title: "Inclusive design: crafting Accessible apps for all of us"
Abstract: "One of the main goals for an app developer should be to reach everyone. ...
<result>
{{
"title": "Inclusive design: crafting Accessible apps for all of us",
"topic": "Accessibility",
"tags": ["user experience", "design", "assistance", "tools", "SwiftUI", "Swift", "Apple"]
}}
</result>
<context>
// Swift Heroes 2024 schedule …
</context>
</example>
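Prompts like this are usually assembled programmatically rather than written by hand. A minimal sketch of a few-shot prompt builder (the function name and structure are illustrative, not from the talk), using the same tag conventions as the slide:

```python
def few_shot_prompt(task, examples, context):
    """Compose a few-shot prompt: task instructions, worked
    examples, then the context the model should answer from."""
    parts = [task]
    for sample_input, sample_result in examples:
        parts.append(
            f"<example>\n{sample_input}\n<result>\n{sample_result}\n</result>\n</example>"
        )
    parts.append(f"<context>\n{context}\n</context>")
    return "\n".join(parts)

prompt = few_shot_prompt(
    task="Summarize all the talks between context tags as an array of JSON objects.",
    examples=[("Title: ...\nAbstract: ...",
               '{"title": "...", "topic": "...", "tags": []}')],
    context="// Swift Heroes 2024 schedule ...",
)
```

Zero-shot prompts skip the `examples` list entirely; one-shot uses a single example, as on the slide; few-shot uses several.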
Prompt engineering patterns
[
  {
    "title": "Building Swift CLIs that your users and contributors love",
    "topic": "Swift CLIs",
    "tags": ["terminal", "Tuist", "contributors"]
  },
  {
    "title": "Macro Polo: A new generation of code generation",
    "topic": "Swift Macros",
    "tags": ["code generation", "modules", "examples"]
  },
  {
    "title": "Typestate - the new Design Pattern in Swift 5.9",
    "topic": "Typestate Design Pattern",
    "tags": ["state machines", "memory ownership", "generics", "constraints", "safety"]
  },
  {
    "title": "A Tale of Modular Architecture with SPM",
    "topic": "Modular Architecture",
    "tags": ["MVVM", "SwiftUI", "SPM"]
  },
  ...
]
Prompt engineering patterns
Persona and audience
How does async/await work in Swift?
Act as an experienced iOS developer
Explain to a 12-year-old
Prompt engineering patterns
Chain-of-thought
How does async/await work in Swift?
Act as an experienced iOS developer
Explain to the Product Manager
Follow this Chain-of-thought:
- Technical details, using an analogy the audience can understand
- Benefits relevant to the audience
- Additional things to consider
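The three patterns on these slides (persona, audience, chain-of-thought) compose naturally into one prompt template. A small sketch (the helper name and parameter layout are illustrative, not from the talk):

```python
def structured_prompt(question, persona, audience, steps):
    """Combine persona, audience, and an explicit
    chain-of-thought the model should follow."""
    chain = "\n".join(f"- {step}" for step in steps)
    return (f"Act as {persona}\n"
            f"Explain to {audience}\n"
            f"{question}\n"
            f"Follow this Chain-of-thought:\n{chain}")

prompt = structured_prompt(
    question="How does async/await work in Swift?",
    persona="an experienced iOS developer",
    audience="the Product Manager",
    steps=["Technical details, using an analogy the audience can understand",
           "Benefits relevant to the audience",
           "Additional things to consider"],
)
```

Swapping only the `audience` argument ("a 12-year-old", "the Product Manager") reuses the same question with very different answers, which is the point of the pattern.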
Pair programming with AI tools - Copilot
Pair programming with AI tools - Copilot Chat
Pair programming with AI tools - ChatGPT
Discovering Swift Packages
Converting from legacy code
Writing unit tests and generating test data
Multiplatform development
Brainstorming and evaluating ideas