“It would be unfair to the population to run the risk of the project not being approved simply because it was written by artificial intelligence.”
I agree. What is wrong with having AI create an output that we’re happy to sign off on? The only problem I can think of is that society likes having a person to blame when something goes wrong. But in this case, a 36-member council voted on the legislation, so they would bear ultimate responsibility. If the chatbot has made something up, it’s the council’s job to do its due diligence, just as it would be expected to if a person had made something up.
“It may not always be able to account for the nuances and complexities of the law. Because ChatGPT is a machine learning system, it may not have the same level of understanding and judgment as a human lawyer when it comes to interpreting legal principles and precedent.”
Again, I agree with the statement. However, in this context we can argue that the council members, who are writing and signing off on the legislation, also lack that level of understanding. Why are we holding AI to a standard that we don’t expect of humans? In some contexts, what AI produces is already no worse than what people produce. And AI is almost certainly much quicker and cheaper, and therefore more cost-effective, than people.
We’re going to see more and more examples of AI-generated content being used in the wild, because in many cases it will be better than what people can produce on their own.
As the president of this council says at the end: “I changed my mind. I started to read more in depth and saw that, unfortunately or fortunately, this is going to be a trend.”