Hi Charlie! Absolutely yes. Everything you're describing is also about how to _manage programmers_. In fact, everything about the challenges of working with AI reminds me of the challenges of working with people. So, as a programmer using AI, you've (perhaps unintentionally) promoted yourself to manager of a programmer.
Maybe we can make some formal rules about how to write prompts? A kind of restricted but precise language that the AI can easily interpret without ambiguity. We could call it Python, and call the LLM a Compiler :)
Ha ha; but a "programmer" using higher-level natural human language and a way to supply lots of relevant context can move much faster than someone who insists on typing each line of Python themselves.
Citation needed. :)
How would you prompt an LLM to do this? (I know, bad formatting)
------------------------
import numpy as np

# System: f1(x, y) = 4x - 2xy,  f2(x, y) = 2y + xy - 2y^2
def f(x):
    x0, x1 = x[0, 0], x[1, 0]
    return np.array([[4*x0 - 2*x0*x1], [2*x1 + x0*x1 - 2*x1**2]])

# Inverse Jacobian via the 2x2 adjugate formula; d is det J
def JI(x):
    x0, x1 = x[0, 0], x[1, 0]
    d = (4 - 2*x1)*(2 + x0 - 4*x1) + 2*x0*x1
    return (1/d)*np.array([[2 + x0 - 4*x1, 2*x0], [-x1, 4 - 2*x1]])

x0 = float(input("x0: "))
x1 = float(input("x1: "))
x = np.array([[x0], [x1]])
N = 20
for i in range(N):
    x = x - JI(x) @ f(x)        # Newton step: x <- x - J(x)^{-1} f(x)
    if i >= N - 10:             # print only the last 10 of the 20 iterations
        print("%4d: (%0.8f, %0.8f)" % (i, x[0, 0], x[1, 0]))
Write a Python implementation of Newton's method for solving the nonlinear system f₁(x,y)=4x−2xy=0 and f₂(x,y)=2y+xy−2y²=0, printing the last 10 of 20 iterations.
===
What do you think of that prompt?
(I used AI to get that prompt. Then I fed that prompt into a coding AI, and it gave me something that almost worked. After I gave it the error message, it revised the code and came up with this: https://pastebin.com/ku73nQma . No idea if that code is correct!)
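One way to check, independent of which implementation you trust: this particular system factors by hand, so its exact roots are known. Working it out (my own quick derivation, not from the thread):

\[
f_1 = 4x - 2xy = 2x(2 - y) = 0,
\qquad
f_2 = 2y + xy - 2y^2 = y(2 + x - 2y) = 0,
\]

so x = 0 or y = 2 from the first equation, and y = 0 or x = 2y − 2 from the second, which gives the roots (0, 0), (0, 1), and (2, 2). A correct Newton iteration should settle on one of these, depending on the starting point.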
It was just a computation of a Jacobian, part of an ML exercise.
Your prompt is OK, but you are not sure that the answer is correct. I tested the code I wrote by hand and know it to be correct. I also understand what it is doing.
But the larger point is that an LLM is statistical and can give you different answers to the same prompt. I wonder what would happen if you rephrased your prompt a little.
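To make that concrete: most chat models sample tokens, so with a temperature above zero the very same prompt can come back with a different program each time. A rough sketch, assuming the OpenAI Python client is installed and OPENAI_API_KEY is set (the model name is only an example):
------------------------
from openai import OpenAI

client = OpenAI()
# The exact prompt from above, sent three times unchanged.
prompt = ("Write a Python implementation of Newton's method for solving "
          "the nonlinear system f1(x,y)=4x-2xy=0 and f2(x,y)=2y+xy-2y**2=0, "
          "printing the last 10 of 20 iterations.")

for attempt in range(3):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # any chat model works; this name is just an example
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,       # temperature > 0 means the output is sampled
    )
    print(f"--- attempt {attempt} ---")
    print(resp.choices[0].message.content[:200])
------------------------
Run it and the three completions will typically differ, sometimes only in style, sometimes in substance.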
Humans also give different answers to the same question, which is why they can't be trusted either.
Sure, but do I want an expensive computer system to give me unreliable answers?
Thanks, love the article. It resonates with me from the angles of computational linguistics and Natural Language Understanding (NLU). In other words, GenAI does not understand meaning, and it may also lack sufficient context and grasp of intention to provide an accurate and elegant solution. The gaps in precision when programming a PBJ are also gaps in the processing of meaning, in the ability to "grok": https://en.wikipedia.org/wiki/Grok
This confirms my suspicion that it's usually the prompter's fault, not the AI's. Still, as models get more advanced and smarter, we'll be able to give them sensible instructions and they'll follow novel but reasonable plans.