I agree. The only thing you can do is have a human panel verify every output, but that defeats the purpose. In time, and that may be soon, we may be able to trust it with critical tasks. One of the issues now is jailbreaks: another system may be able to hill-climb an attack on a static neural network. Such attacks could be mitigated by having multiple neural network models verify each other.
I think the current best ChatGPT model has around 10 trillion parameters, which is a lot more than the human brain has neurons. And according to Nvidia, it may be possible to train models with 10 quadrillion parameters in about 10 years. Excluding algorithmic improvements, we need just two things right now: compute and data. You can bet they will do anything to get that data, including breaking every law in the book.
Combine super-aggressive data mining with ever-expanding AI, and our freedom will be stripped away. We'll still have perceived freedom, but we'll just be puppets on strings, and it's already happening right now. A very large percentage of the population is hooked on their smartphones, endlessly scrolling through their feeds, which in turn feeds the AI even more data.