Part 1: The Beginning of Everything - The Fundamentals of Deep Learning
Before embarking on our main journey, we will explore the fundamental principles of deep learning, the backbone of modern AI. We will break down, with visual aids, how artificial neural networks mimic the human brain to learn patterns, and the concepts of Backpropagation and Gradient Descent, the processes models use to find correct answers. You will come to understand how these foundational principles become the master key to training massive Transformer models with billions of parameters.

Part 2: An Innovation in Understanding Context - The Transformer Architecture
In Part 2, we unpack the architecture introduced in "Attention Is All You Need."
Tokenization and Embedding: We will learn how computers convert human language into numbers and represent the meaning of words and sentences in a vector space through techniques like Byte Pair Encoding (BPE) and Word2Vec.
Attention Mechanism: We will delve into Self-Attention, the most innovative concept of the Transformer. The principle of dynamically calculating the importance of words to each other within a sentence to grasp context will be explained through the concepts of Query, Key, and Value.
Multi-Head Attention and Architecture: We will visualize the process of extracting rich contextual information by analyzing a sentence from multiple perspectives using several attention "heads" rather than a single one. We will provide an overview of the entire structure, examining how the encoder and decoder interact to perform complex tasks such as translation, summarization, and question-answering.

Part 3: LLM and AI Agents
We will discuss the process of fine-tuning pre-trained models on vast amounts of data for specific purposes, and the importance of Retrieval-Augmented Generation (RAG) technology in mitigating LLM hallucinations. Furthermore, we will introduce the concept and real-world examples of AI Agents that go beyond simply answering questions to independently plan, use tools, and solve problems.

Part 4: The Future of Development - Vibe Coding
Finally, we will introduce the new wave these AI technologies have brought to the software development scene: "Vibe Coding." This is a development method where developers no longer write every line of code manually but rather converse and collaborate with AI through natural language to rapidly implement ideas into prototypes.
Core Tool Analysis: We will demonstrate the pros, cons, and practical applications of leading AI coding assistant tools such as GitHub Copilot, Cursor, Gemini CLI, and OpenAI Codex, through live examples.
Productivity Revolution: We will showcase a process where a Product Requirements Document (PRD) is given to an AI agent, which then automatically generates an initial version of the entire application. This will illustrate how developers can break free from repetitive tasks and focus more on creative problem-solving.
Study
BIM, GIS, Facility Management, IoT, Scan to BIM, and AX
12 books published
https://github.com/mac999
Study
Photo caption: Members of the Nobel Committee for Chemistry at the Royal Swedish Academy of Sciences explain the work of 2024 Nobel Prize in Chemistry winners David Baker, Demis Hassabis, and John M. Jumper. (JONATHAN NACKSTRAND/AFP via Getty Images)
AI Pioneers Geoffrey Hinton And John Hopfield Win Nobel Prize For Physics | Latest News | WION
Study
Recommended setup for hands-on practice
Hugging Face – The AI community building the future: sign up (free option)
GitHub: sign up (free option)
Microsoft Copilot: Your AI companion – sign up
ChatGPT: sign up
Claude: sign up
LLM & AI Agent Trend
mac999/LLM-RAG-Agent-Tutorial: LLM-RAG-Agent-Tutorial
AX Era
AX Era
LLM
Large Language Models as General Pattern Machines
Multi AI Agent
Yuheng Cheng, 2024, Exploring Large Language Model based Intelligent Agents: Definitions, Methods, and Prospects
Vibe coding. No coding
Vibe Coding
This Game Created by AI 'Vibe Coding' Makes $50,000 a Month. Yours Probably Won't
Wix Acquires Six-month-old AI "Vibe Coding" Startup Base44 for $80M Cash
Cognizant's Vibe Coding Lesson for Indian IT
Vibe Coded a Website With My Daughter Using an AI Tool Called Bolt - Business Insider
Vibe Coding: The Future of Software Development or Just a Trend? - Lovable Blog
Build Apps with AI in Minutes | Base44
Vibe coding. No coding
Gemini CLI vs Claude Code vs Cursor – Which is the best option for coding? – Bind AI IDE
Is Vibe Coding the Future of Software Development? A Deep Dive into AI’s Role
OpenAI Codex: Transforming Software Development with AI Agents - DevOps.com
Vibe coding. No coding
The spread of no-code development
Daddy Makers: A comparative analysis of no-code services
Vibe coding. No coding
AI agent technologies for software engineering
Popular AI Agents for Devs: Chatdev, SWE-Agent & Devin [Example Project]
Software Development Life Cycle Models and Methodologies - Mohamed Sami
AI Foundation
mac999/AI_foundation_tutorial
Deep learning
Deep Learning Neural Networks Explained in Plain English - Becoming Human
ŷ=f(W⋅x+b)
ŷ
Deep learning
Fei-Fei Li & Ehsan Adeli, 2024, Stanford University
Backpropagation
Deep learning
Fei-Fei Li & Ehsan Adeli, 2024, Stanford University
Backprop: Rumelhart, Hinton, and Williams, 1986
ΔW = −α · ∂LOSS/∂W
W_new = W + ΔW
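A minimal sketch of this update rule in PyTorch (the layer sizes, learning rate, and toy data below are illustrative assumptions, not from the slides): backpropagation computes the gradient of the loss with respect to W, then each weight is moved a small step against its gradient.

import torch

# Toy model ŷ = f(W·x + b) with illustrative sizes
W = torch.randn(3, 2, requires_grad=True)
b = torch.zeros(3, requires_grad=True)
x = torch.randn(2)
y = torch.randn(3)                       # target
alpha = 0.01                             # learning rate α

y_hat = torch.relu(W @ x + b)            # forward pass: ŷ = f(W·x + b)
loss = ((y - y_hat) ** 2).mean()         # LOSS(y − ŷ)
loss.backward()                          # backpropagation: compute ∂LOSS/∂W and ∂LOSS/∂b

with torch.no_grad():                    # gradient descent update: W = W − α·∂LOSS/∂W
    W -= alpha * W.grad
    b -= alpha * b.grad
    W.grad.zero_(); b.grad.zero_()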
Backpropagation
Deep learning
Gradient Descent and Cost Function in Python - Startertutorials
ŷ = f(W⋅x + b), target = min LOSS(y − ŷ)
Deep learning
Enhancing Multi-Layer Perceptron Performance: Demystifying Optimizers | by Anand Raj | Towards AI
ŷ=f(W⋅x+b)
target=minLOSS(y-ŷ)
Transformer
mac999/AI_foundation_tutorial
Token   Token ID
How     10
Are     4
You     13
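A minimal sketch of the mapping above (the vocabulary dictionary is a toy assumption built from the table): converting tokens to token IDs is just a lookup.

# Toy vocabulary taken from the table above (illustrative only)
vocab = {"How": 10, "Are": 4, "You": 13}

def tokenize(text):
    # Whitespace split is a simplification; real tokenizers use subword methods such as BPE
    return [vocab[token] for token in text.split()]

print(tokenize("How Are You"))   # [10, 4, 13]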
Token
BPE (Byte Pair Encoding)
Understanding Byte Pair Encoding (BPE) in Large Language Models
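A minimal sketch of the core BPE idea, assuming a toy corpus (the word frequencies below are made up for illustration): count adjacent symbol pairs and repeatedly merge the most frequent pair into a new token.

from collections import Counter

# Toy corpus: each word is a tuple of symbols (characters to start with) and its frequency
corpus = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2, ("n", "e", "w"): 6}

def most_frequent_pair(corpus):
    pairs = Counter()
    for word, freq in corpus.items():
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(corpus, pair):
    merged = {}
    for word, freq in corpus.items():
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                out.append(word[i] + word[i + 1]); i += 2   # merge the pair into one symbol
            else:
                out.append(word[i]); i += 1
        merged[tuple(out)] = freq
    return merged

pair = most_frequent_pair(corpus)    # e.g. ('l', 'o')
corpus = merge_pair(corpus, pair)    # 'l', 'o' now becomes the single token 'lo'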
Embedding
http://suriyadeepan.github.io/
Intro to Word Embeddings and Vectors for Text Analysis
Embedding
# Skip-gram training loop (sketch): predict each context word from the center word
for center_word, context_words in dataset:
    loss = 0.0
    center_vec = word_embedding(center_word)            # embedding of the center word
    for context in context_words:
        context_vec = word_embedding(context)           # embedding of one context word
        loss += negative_sampling_loss(center_vec, context_vec)
    optimizer.zero_grad()
    loss.backward()                                     # backpropagate the accumulated loss
    optimizer.step()                                    # update the embedding weights
Word2Vec (Google, 2013)
Word2Vec uses two training methods: Skip-gram and CBOW (Continuous Bag of Words).
Skip-gram takes the center word as input and predicts the surrounding words; CBOW predicts the center word from its surrounding words.
In the sentence "The quick brown fox jumps", if the window size around the center word "fox" is 2, the context words are "quick", "brown", and "jumps".
Word2Vec is trained by minimizing the loss of predicting the context words. Embedding models are usually trained in an unsupervised manner on large-scale text data.
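A minimal sketch of generating skip-gram training pairs with a symmetric window (the sentence is the one from the slide; the function and variable names are illustrative):

sentence = "The quick brown fox jumps".split()
window = 2

def skipgram_pairs(tokens, window):
    pairs = []
    for i, center in enumerate(tokens):
        # Context = words within `window` positions to the left and right of the center word
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

# For the center word "fox" this yields ("fox", "quick"), ("fox", "brown"), ("fox", "jumps")
print([p for p in skipgram_pairs(sentence, window) if p[0] == "fox"])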
Sequence semantic similarity
Computing semantic similarity of texts based on deep graph learning with ability to use semantic role label information | Scientific Reports
Advances in Semantic Textual Similarity
Attention
Link: Understanding the working mechanism of the deep-learning Transformer encoder through a core code implementation
Transform Problem
Masking in Transformer Encoder/Decoder Models - Sanjaya's Blog
auto-regressive manner
Sequence context calculation
Query
Key
Sequence context calculation
Cosine similarity for sequence context calculation
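A minimal sketch of cosine similarity between two context vectors (the vector values below are toy assumptions, not from the slides):

import torch
import torch.nn.functional as F

q = torch.tensor([0.2, 0.8, 0.1])          # e.g. a query vector
k = torch.tensor([0.3, 0.7, 0.0])          # e.g. a key vector

# cos(q, k) = (q · k) / (|q| |k|); a value close to 1 means a similar direction (similar context)
similarity = F.cosine_similarity(q, k, dim=0)
print(similarity.item())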
Decoding strategies in Decoder models (LLMs) - Sanjaya's Blog
Sequence context calculation
Attention Mask
Attention equation
Query = Q x Qw
Key = K x Kw
Value = V x Vw
Attention score matrix
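For reference, the standard scaled dot-product attention equation behind the score matrix above (as defined in "Attention Is All You Need") is:

Attention(Q, K, V) = softmax(Q·K^T / √d_k) · V

where d_k is the dimension of the key vectors; each softmax row gives the weights used to combine the value vectors for one query.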
[Figure: cross-attention between English and Korean in a shared context space. Query embeddings E_how, E_are, E_you attend to Korean keys/values K_잘, K_지내, K_너; with the value projection Vw this yields the contextual embedding vector Z_context, which passes through Add & Norm → FF → Add & Norm → Linear → Softmax to produce logits over the vocabulary. A linear transformation followed by a weighted linear combination in the self-attention space gives the contextual embedding.]
Self Attention Space
Contextual embedding
"나는학생" → 어텐션→ out → FFN → logits (10000차원) → softmax → "입니다" (예측)
source code
A vector that has passed through Self-Attention acquires a unique representation that reflects its meaning in the particular context (sentence) it appears in. For example, the embedding of "사과" meaning the fruit (apple) and the embedding of "사과" meaning the act (apology) end up as different vectors because of the influence of the surrounding words.
Multi-Head Attention
Positional Encoding
Tokens:    how  are  you  ?  See  you  soon  <EOS>  <PAD>  <PAD>
Positions: 1    2    3    4  5    6    7     8      9      10
Learning Position with Positional Encoding - Scaler Topics
pos = position of word in sequence
d = embedding dimension
i = index in embedding vector, 0 <= i < d/2
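With these symbols, the sin/cos positional encoding used in the original Transformer is:

PE(pos, 2i)   = sin(pos / 10000^(2i/d))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d))

so each position receives a unique pattern of sine and cosine values across the embedding dimensions.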
A Gentle Introduction to Positional Encoding in Transformer Models, Part 1 - MachineLearningMastery.com
Transformer Architecture
Daddy Makers: Understanding the working mechanism of the deep-learning Transformer encoder through a core code implementation
Five most popular similarity measures implementation in python - Dataaspirant
Tutorial 6: Transformers and Multi-Head Attention - UvA DL Notebooks v1.2 documentation
Training dataset in Transformer
source code
Decoder input (hint) | Decoder output (predicted logits) | Target label | Loss calculation
<sos> (0) | Logit 1 (e.g. {'너': 0.3, '잘': 0.1, ...}) | 너 (100) | How far Logit 1's prediction is from '너'
너 (100) | Logit 2 (e.g. {'너': 0.1, '잘': 0.4, ...}) | 잘 (101) | How far Logit 2's prediction is from '잘'
잘 (101) | Logit 3 (e.g. {'있니': 0.5, '있다': 0.2, ...}) | 있니 (102) | How far Logit 3's prediction is from '있니'
있니 (102) | Logit 4 (e.g. {'<eos>': 0.6, '입니다': 0.1, ...}) | <eos> (1) | How far Logit 4's prediction is from '<eos>'
Step 1. Tokenization
Type | Purpose | Original | Tokenized result
Encoder input | Source sentence to translate | how are you | [10, 11, 12]
Decoder input | The target shifted right by one position, given to the model as a hint (Shifted Right) | <sos> 너 잘 있니 | [0, 100, 101, 102]
Target label | The actual answer the model must predict at each step | 너 잘 있니 <eos> | [100, 101, 102, 1]
<sos>: 0, <eos>: 1, <pad>: 2. how: 10, are: 11, you: 12. 너: 100, 잘: 101, 있니: 102
Step 2. Train dataset preparation
Step 3. Encoder Forward Pass > Context vector (Self attention. English feature)
Step 4. Decoder Forward Pass > Context vector (Cross attention. English + Korean feature)
Step 5. Calculate the loss between the logits and the target labels (Cross-Entropy Loss)
Step 6. Backpropagation to reduce the difference between the prediction for "how are you" and the target "너 잘 있니 <eos>" (see the training sketch below)
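A minimal sketch of Steps 2 to 6 using the token IDs from the table above. The model here is a stock PyTorch nn.Transformer used purely for illustration; the sizes, learning rate, and other hyperparameters are assumptions, not from the slides.

import torch
import torch.nn as nn

vocab_size, d_model = 200, 32
embed = nn.Embedding(vocab_size, d_model)
model = nn.Transformer(d_model=d_model, nhead=4, num_encoder_layers=1,
                       num_decoder_layers=1, batch_first=True)
to_logits = nn.Linear(d_model, vocab_size)
params = list(embed.parameters()) + list(model.parameters()) + list(to_logits.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

src = torch.tensor([[10, 11, 12]])            # encoder input: "how are you"
tgt_in = torch.tensor([[0, 100, 101, 102]])   # decoder input (shifted right): <sos> 너 잘 있니
labels = torch.tensor([[100, 101, 102, 1]])   # target labels: 너 잘 있니 <eos>

tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt_in.size(1))       # causal decoder mask
out = model(embed(src), embed(tgt_in), tgt_mask=tgt_mask)                        # Steps 3-4: forward pass
logits = to_logits(out)                                                          # (1, 4, vocab_size)
loss = nn.functional.cross_entropy(logits.view(-1, vocab_size), labels.view(-1)) # Step 5
loss.backward()                                                                  # Step 6: backpropagation
optimizer.step()
optimizer.zero_grad()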
Training transformer model
1. At first, QK^T computes essentially meaningless similarities, so after softmax the output is close to a plain average of V.
2. If this output is far from the target label (e.g., the next word), the loss increases.
3. Backpropagation updates the weights WQ, WK, WV that produce Q, K, and V.
4. Repeating steps 1-3 over batches of data, Q, K, and V each learn to play a different role in capturing context.
Scaled Dot Product Attention
This module is the core component that performs the basic attention operation.
Using the Q (Query), K (Key), and V (Value) vectors given as input, it performs the following computation.
Multi-Head Attention
Computing attention as a single vector loses too much information, so several attention "heads" are run in parallel and their results are concatenated.
Positional Encoding
Because the Transformer itself does not model word order, the position of each word must be encoded.
To do this, positional encoding vectors are generated from sine (sin) and cosine (cos) functions.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledDotProductAttention(nn.Module):
    def forward(self, Q, K, V):
        d_k = K.size(-1)                                               # key dimension used for scaling
        scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(d_k)
        attn = F.softmax(scores, dim=-1)                               # attention weights
        return torch.matmul(attn, V)                                   # weighted sum of the values
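A small usage sketch (the tensor sizes are illustrative assumptions): a batch of 2 sequences of length 5 with 64-dimensional Q, K, V.

attention = ScaledDotProductAttention()
Q = torch.randn(2, 5, 64)
K = torch.randn(2, 5, 64)
V = torch.randn(2, 5, 64)
context = attention(Q, K, V)     # shape (2, 5, 64): each position is a weighted mix of the values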
class MultiHeadAttention(nn.Module):
    def forward(self, Q, K, V):
        # Linearly project Q, K, V and split them into several heads
        # Compute scaled dot-product attention for each head
        # Concatenate the heads and apply the output linear projection
        return output
class PositionalEncoding(nn.Module):
    def forward(self, x):
        # Add the precomputed sin/cos position vectors to the input embeddings
        return x + self.pe[:, :x.size(1)]
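The self.pe table used above can be precomputed as in this minimal sketch (d_model and max_len are assumed parameters, d_model assumed even; it relies on the torch and math imports above):

def build_positional_encoding(d_model, max_len=512):
    # PE(pos, 2i) = sin(pos / 10000^(2i/d)), PE(pos, 2i+1) = cos(pos / 10000^(2i/d))
    pe = torch.zeros(max_len, d_model)
    position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)                       # (max_len, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
    pe[:, 0::2] = torch.sin(position * div_term)    # even dimensions
    pe[:, 1::2] = torch.cos(position * div_term)    # odd dimensions
    return pe.unsqueeze(0)                          # shape (1, max_len, d_model), ready to slice in forward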
class EncoderLayer(nn.Module):
    def forward(self, x):
        x = x + self.self_attn(x, x, x)   # multi-head self-attention with a residual connection
        x = self.norm1(x)                 # Add & Norm
        x = x + self.ffn(x)               # position-wise feed-forward with a residual connection
        x = self.norm2(x)                 # Add & Norm
        return x
Vibe coding
google-gemini/gemini-cli: An open-source AI agent that brings the power of Gemini directly into your terminal.
Daddy Makers: An analysis and usage guide for Google's Gemini CLI tool for vibe coding
gemini cli vibe coding demo
Vibe coding
gemini cli vibe coding demo
gemini-cli/docs/cli/commands.md at main · google-gemini/gemini-cli
Vibe coding
gemini cli vibe coding demo
> make photoshop web app using three.js, bootstrap. Menus include layer, line, arc, circle, fill color with transparency, border color, zoom in/out, pan, download file as JPG
Vibe coding
Hands-on
Codex
Vibe coding
openai/codex: Lightweight coding agent that runs in your terminal
Daddy Makers: How to use Codex, OpenAI's multi-agent tool for vibe coding