LLMs and AI Agents

Day 1 - Block 2

Training Objective at the Core

\(P \left (x_t \mid x_{<t}, \theta \right )\)

  • \(x_t\): next token
  • \(x_{<t}\): previous tokens
  • \(\theta\): learned parameters
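The objective can be made concrete with a tiny sketch: the model (here, hand-set logits standing in for \(\theta\)) scores every candidate next token given the previous ones, and a softmax turns those scores into \(P(x_t \mid x_{<t}, \theta)\). The vocabulary and logit values below are illustrative, not from any real model.

```python
import math

def softmax(logits):
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

vocab = ["cat", "sat", "mat"]                # toy vocabulary
logits = [2.0, 0.5, -1.0]                    # model scores for the next token
probs = softmax(logits)                      # P(x_t | x_<t, theta) per candidate

# Training maximizes log P(correct next token),
# i.e. minimizes the cross-entropy -log p.
target = vocab.index("cat")
loss = -math.log(probs[target])
```

Training nudges \(\theta\) so that `loss` shrinks on the observed next token, averaged over the corpus.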

AI agents are 🧅s

LLM at the center

It is still next-token prediction!

Pre-training

Post-training

| Stage         | Main goal  | Typical effect |
|---------------|------------|----------------|
| Pre-training  | Generation | Capability     |
| Post-training | Alignment  | Compliance     |

Tokens Can Become Actions

{"tool":"web_search","query":"latest ECB inflation forecast"}

sequenceDiagram
  participant M as LLM
  participant T as Tool/API
  M->>T: action token -> tool call
  T->>M: observation/result

Actions can pull in fresh context

flowchart LR
  Q[Prompt] --> R[Web-search]
  R --> K[Filter]
  K --> C[Read]
  C --> G[Continue generation]
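The pipeline in the flowchart can be sketched with stub functions; every function name and return value below is illustrative, not a real retrieval API.

```python
def web_search(prompt):
    return ["doc-a", "doc-b", "spam"]            # candidate results

def filter_results(results):
    return [r for r in results if r != "spam"]   # drop low-quality hits

def read(docs):
    return " ".join(f"<{d}>" for d in docs)      # fetched document text

prompt = "latest ECB inflation forecast"
fresh_context = read(filter_results(web_search(prompt)))
augmented = prompt + "\n" + fresh_context        # generation continues on this
```

The key point survives the stubs: the model's own action tokens trigger the fetch, and the fetched text re-enters the context window before the next token is predicted.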

Agents ❤️ Code

result = run_python("sum([2.1, 3.4, 5.0])")   # execute code, capture the output
context += f"Observed: {result}"              # feed the observation back in
answer = llm(context)                         # generate with the result in context
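The `run_python` helper above is hypothetical; a minimal sketch of it using Python's own `eval` in a restricted namespace looks like this. A real agent would sandbox execution far more carefully (subprocess, timeouts, resource limits).

```python
def run_python(code):
    # Evaluate an expression with builtins disabled and only `sum` exposed.
    # Illustrative only -- eval is not a safe sandbox for untrusted code.
    allowed = {"sum": sum}
    return str(eval(code, {"__builtins__": {}}, allowed))

result = run_python("sum([2.1, 3.4, 5.0])")
```

This is why agents love code: the model offloads exact arithmetic to an interpreter instead of predicting digits token by token.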

Powerful!

flowchart LR
  C[Context] --> A[LLM]
  
  A -->|call| T[Tools]
  A -->|write| M[Memory]
  A -->|generate| END{END}
  A -->|generate| C
  
  T --> E[Environment]
  M --> C
  E --> C

  style A fill:#90EE90,stroke:#333,stroke-width:1px
  style E fill:#bdd0ff,stroke:#f7f9fa,stroke-width:1px
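The loop in the diagram can be sketched in a few lines: at each step the LLM either calls a tool (whose observation flows back into the context), writes to memory, or generates a final answer and ends. `llm_step` is a hypothetical stand-in for a real model call, hard-coded here so the loop is runnable.

```python
def llm_step(context):
    # Stand-in for the LLM: call a tool once, then answer.
    if "Observed" not in context:
        return ("call", "web_search", "ECB forecast")
    return ("generate", "final answer", None)

def run_tool(name, arg):
    return f"[{name} result for {arg}]"          # environment stub

def agent(context, max_steps=5):
    memory = []
    for _ in range(max_steps):
        kind, a, b = llm_step(context)
        if kind == "call":                       # Tools -> Environment -> Context
            context += "\nObserved: " + run_tool(a, b)
        elif kind == "generate":                 # END
            return a
        memory.append((kind, a))                 # Memory -> Context (simplified)
    return "max steps reached"
```

Every arrow in the flowchart is a string concatenation or a function call; the LLM in the green node is still just predicting the next token of `context`.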
  

TODO (next revision)

  • Add sustainability implications of AI use
  • Cover energy, water, and e-waste tradeoffs

References

Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, et al. 2020. “Language Models Are Few-Shot Learners.” arXiv Preprint arXiv:2005.14165. https://arxiv.org/abs/2005.14165.
Hoffmann, Jordan, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, et al. 2022. “Training Compute-Optimal Large Language Models.” arXiv Preprint arXiv:2203.15556. https://arxiv.org/abs/2203.15556.
Kaplan, Jared, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. “Scaling Laws for Neural Language Models.” arXiv Preprint arXiv:2001.08361. https://arxiv.org/abs/2001.08361.
Lewis, Patrick, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, et al. 2020. “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.” In Advances in Neural Information Processing Systems. https://proceedings.neurips.cc/paper_files/paper/2020/hash/6b493230205f780e1bc26945df7481e5-Abstract.html.
Rafailov, Rafael, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2023. “Direct Preference Optimization: Your Language Model Is Secretly a Reward Model.” arXiv Preprint arXiv:2305.18290. https://arxiv.org/abs/2305.18290.
Schick, Timo, Jane Dwivedi-Yu, Roberto Dessi, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. “Toolformer: Language Models Can Teach Themselves to Use Tools.” arXiv Preprint arXiv:2302.04761. https://arxiv.org/abs/2302.04761.
Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” In Advances in Neural Information Processing Systems. https://arxiv.org/abs/1706.03762.
Yao, Shunyu, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022. “ReAct: Synergizing Reasoning and Acting in Language Models.” arXiv Preprint arXiv:2210.03629. https://arxiv.org/abs/2210.03629.