Artificial intelligence models can be made to spout gibberish by changing just one of the billions of numbers that make them up.
Large language models (LLMs), like the one behind OpenAI's ChatGPT, contain billions of parameters, or weights: the numbers that represent each "neuron" in a neural network. These weights are adjusted and fine-tuned during training so that the AI learns abilities such as generating text. Inputs are passed through the weights, and the statistically most likely output is determined.
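As a minimal sketch of that idea (not from the article, and with all sizes and names purely illustrative), the toy Python model below passes an input through a small weight matrix to pick the statistically most likely output, then corrupts a single weight to show how the model's preferred output can flip:

```python
import numpy as np

# A toy "language model": one linear layer mapping a 4-number input to
# scores over a 3-token vocabulary. Real LLMs chain billions of such
# weights; this setup is illustrative only.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 3))  # the trainable parameters ("weights")
x = np.ones(4)                     # a stand-in input representation

def most_likely_token(w):
    logits = x @ w                                 # pass the input through the weights
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
    return int(np.argmax(probs))                   # statistically most likely output

before = most_likely_token(weights)
weights[0, before] -= 1000.0       # corrupt a single one of the weights
after = most_likely_token(weights)
print(before, after)               # the corrupted model now prefers a different token
```

In a real LLM the damage would show up as degraded or nonsensical text rather than a single flipped token, but the mechanism is the same: the output depends on every weight the input flows through.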
Source: www.newscientist.com