Prompt engineering, the traditional approach to working with large language models, is evolving as models gain new capabilities. Models now carry on a richer "internal monologue" through tool use, letting them reason and take actions. This shift has produced agents: systems that pursue goals autonomously. The complexity has moved from the prompt into the model's reasoning, with tools such as the terminal used to accomplish tasks. That autonomy raises concerns about control and security, so a balance must be struck between freedom and constraint.

The primary lever over an agent's choices is the design of its environment, not the scripting of its behavior. Managing the evolving context window of a long-running agent becomes crucial, and this is the job of context engineering. Unlike prompt engineering, which perfects the initial prompt, context engineering focuses on ongoing guidance and access to information. The agent experience, meaning tool discoverability and how information is structured, becomes vital. Tools like Andrew Ng's Context Hub facilitate self-improvement through local and shared feedback loops. This evolution requires careful consideration of the interfaces we build for both humans and AI agents.
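One concrete context-engineering tactic implied above is keeping a long-running agent's history within a token budget. The sketch below is a minimal, hypothetical illustration (not from the article): it always retains the system prompt and drops the oldest turns first. A real agent would use a proper tokenizer and often summarize dropped turns rather than discard them.

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly one token per four characters.
    A real system would use the model's actual tokenizer."""
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the first (system) message, then as many of the most
    recent messages as fit within the token budget."""
    system, rest = messages[0], messages[1:]
    kept = []
    used = estimate_tokens(system["content"])
    for msg in reversed(rest):  # walk newest-to-oldest
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break  # oldest turns beyond the budget are dropped
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))

# Hypothetical history: one stale verbose turn and one recent short turn.
history = [
    {"role": "system", "content": "You are a coding agent."},
    {"role": "user", "content": "x" * 400},  # old, ~100 tokens
    {"role": "user", "content": "y" * 40},   # recent, ~10 tokens
]
trimmed = trim_history(history, budget=30)
print(len(trimmed))  # → 2: system prompt plus the recent turn
```

The design choice here, privileging the system prompt and recency, is only one policy; other approaches compress or summarize evicted turns so information is degraded rather than lost.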
dev.to
