The UK Online Safety Act is quietly one of the most important pieces of legislation enacted by this government. Sure, the competition isn’t fierce. But as the law begins to take effect, we’re starting to see how it will reshape the internet.
From last week’s story:
Social media companies have been told to “tame aggressive algorithms” that promote content harmful to children, as part of Ofcom’s new safety codes of practice.
The child safety rules, introduced as part of the Online Safety Act, have led Ofcom to set tough new requirements for how internet companies deal with children: services must either make their platforms child-safe by default or implement robust age checks to identify children and serve them a safer version of the experience.
The Online Safety Act is not solely focused on children, but these are some of the toughest powers given to Ofcom under the new regulatory regime. Websites will be required to deploy age verification technology so they know which of their users are children, or else ensure that everything they host is safe for children to see.
Content shown to children must follow much stricter rules than the adult web, and some categories, such as pornography and material about suicide, self-harm and eating disorders, must be kept out of young people’s feeds entirely.
But what’s immediately interesting is the requirement quoted above. It is one of the first attempts anywhere in the world to place strict requirements on the curation algorithms that underpin most of the biggest social networks: services such as TikTok and Instagram will be required to suppress the spread of “violent, hateful or abusive material” and “content that encourages online bullying or dangerous challenges” to children’s accounts.
The fear from some is that Ofcom is trying to have its cake and eat it. After all, the easiest way to suppress such content is to block it outright, without having to fiddle with the recommendation algorithm at all. Anything less is a gamble: is it really worth risking a large fine from Ofcom by deciding to allow violent content on to children’s feeds at all, even if you can claim to have suppressed it well below normal levels?
That may seem like a fear that is easily dismissed. Who, after all, is going to fight for the right to show violence to children? But I’m already counting down the days until a well-intentioned government awareness campaign (about safe streets, perhaps, or drug policy) gets suppressed or blocked under the rules. Jim Killock, director of the internet policy thinktank Open Rights Group, said he was concerned that “educational and supportive content, especially related to sexuality, gender identity, drugs and other sensitive topics” could be blocked by moderation systems, leaving young people unable to access it.
Of course, there is opposition from the other side. The Online Safety Act, after all, was designed to fall squarely in the Goldilocks zone of policy.
The Goldilocks theory of policy is very simple: when Mama Bear says the government’s latest bill is too hot and Papa Bear says it is too cold, you can conclude that the actual temperature must be just right.
Unfortunately, the Goldilocks theory sometimes fails, and you learn that what you are actually sitting in front of is not a bowl of perfectly cooked porridge but a chicken roasted from frozen: icy in the middle, burnt on the outside, and bad for your health if you try to eat it.
So while Killock is concerned about the chilling effect, others worry that the code doesn’t go far enough. Beeban Kidron, the crossbench peer and one of the leading proponents of online safety rules for children, worries that the whole thing is too vague to be useful.
She wrote in the FT:
However, this code is weak on design features. The research shows that livestreaming and direct messaging are high risk, yet there are few mandatory mitigations to address them. Similarly, the requirement for a measure to have an existing evidence base does not encourage new approaches to safety … how can we provide evidence that something works if it is never tried?
As we celebrate the arrival of the draft code, we should already be demanding that the holes in the code be fixed, exceptions re-addressed, and lobbyists contained.
The code is out for consultation, but my sense is that this is a formality: everyone involved seems to expect the rules as written to remain largely unchanged by the time they become binding later this year. But the battle over what a safe internet for kids actually means is only just beginning.
AI thinks, therefore AI exists
I know many readers have already made up their minds about the AI sector as a whole, but one of the reasons I still find it so fascinating is that there are very basic things about how artificial intelligence works that we haven’t yet figured out.
Take step-by-step reasoning. One of the most useful discoveries in the field of “prompt engineering” is that LLMs such as GPT are much better at answering complex questions if they are asked to explain their thinking step by step before giving an answer.
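To make that concrete, here is a minimal sketch of the technique in Python. The generate() helper is hypothetical, standing in for whichever chat-completion call your provider offers; the only substantive difference between the two prompts is the instruction to reason before answering.

def build_prompts(question: str) -> tuple[str, str]:
    # Direct prompt: ask for the answer straight away.
    direct = f"{question}\nAnswer with only the final result."
    # Chain-of-thought prompt: ask the model to reason step by step first.
    cot = (
        f"{question}\n"
        "Think through the problem step by step, showing your working, "
        "then give the final answer on its own line."
    )
    return direct, cot

def answer_with_reasoning(question: str, generate) -> str:
    # `generate` is a hypothetical function: prompt text in, model text out.
    _, cot_prompt = build_prompts(question)
    reply = generate(cot_prompt)
    # Treat the last line as the answer; everything above it is the
    # intermediate reasoning the model wrote down along the way.
    return reply.strip().splitlines()[-1]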
There are two plausible explanations, which we can personify as “memory” and “thinking”. The first is that LLMs have no ability to reason silently. All they do is generate the next word (technically the next “token”, which is sometimes just a fragment of a word), which means that unless they actively generate new tokens, their ability to work through a complex thought is limited. Asking the model to “think step by step” lets it write down each part of its answer and use those intermediate steps to reach its final conclusion.
The other possibility is that step-by-step reasoning literally makes the system think more. Each time an LLM outputs a token, it takes exactly one pass through the neural network: it cannot think any harder, or any less, about what the next token should be, however difficult it is (this is wrong, but wrong in the same way that everything you learned about atoms at school is wrong). Step-by-step thinking may change that: the more passes the system takes to answer a question, the more thinking time it gets. If so, reasoning in stages is less like a scratchpad and more like playing for time when faced with a difficult question.
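A minimal sketch of that compute argument, assuming a hypothetical forward() function that stands for one pass through the network: each generated token costs exactly one pass, so an answer preceded by a chain of reasoning tokens simply buys more passes than a one-token answer does.

def generate_reply(prompt_tokens: list[int], forward, max_new_tokens: int, end_token: int = 0):
    # `forward` is hypothetical: it takes the token context so far and
    # returns the next token. One call equals one pass through the network.
    context = list(prompt_tokens)
    passes = 0
    for _ in range(max_new_tokens):
        next_token = forward(context)  # a fixed amount of compute per call
        passes += 1
        context.append(next_token)
        if next_token == end_token:    # assume a conventional end-of-reply token
            break
    # A reply that spells out ten reasoning tokens before answering has had
    # roughly ten times as many passes as a reply that answers in one token.
    return context[len(prompt_tokens):], passes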
So which one is it?
A new paper suggests the latter:
Chain-of-thought responses from language models improve performance across most benchmarks. However, it remains unclear to what extent these performance gains come from human-like task decomposition or simply from the greater computation that the additional tokens allow. We show that transformers can use meaningless filler tokens (eg “……”) in place of a chain of thought to solve two hard algorithmic tasks they could not solve when asked to answer without intermediate tokens.
In other words, if you teach a chatbot to output a dot every time it wants to think, it gets better at thinking. That is easier said than done, the researchers caution, but the finding has important implications for how LLMs are used. For one thing, it suggests that what the system writes when it shows its working may not have much bearing on the final answer: if the reasoning can be replaced with a string of dots, the model was presumably doing the real work in its head all along.
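For readers who want to picture the comparison, here is a hypothetical sketch of the kind of evaluation the paper describes, with generate() and score() standing in for a model call and a task-specific answer checker. The same model is scored when it must answer immediately, when it may write out its reasoning, and when the reasoning span is replaced by meaningless dots. Note that the paper’s result concerns transformers trained for that last setting, so an off-the-shelf chatbot will not necessarily benefit from the filler.

def filler_prompt(question: str, n_filler: int = 50) -> str:
    # Replace the reasoning span with meaningless filler tokens before
    # asking for the final answer, as in the paper's setup.
    return f"{question}\n{'.' * n_filler}\nFinal answer:"

def compare_conditions(questions, answers, generate, score) -> dict[str, float]:
    # `generate` and `score` are hypothetical: model call and answer checker.
    conditions = {
        "no intermediate tokens": lambda q: f"{q}\nFinal answer:",
        "chain of thought": lambda q: f"{q}\nThink step by step, then give the final answer:",
        "filler tokens": filler_prompt,
    }
    accuracy = {}
    for name, make_prompt in conditions.items():
        correct = sum(score(generate(make_prompt(q)), a) for q, a in zip(questions, answers))
        accuracy[name] = correct / len(questions)
    return accuracy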