In the real future, will this question seem silly to ask?
People talk about regulation being so important, but how can you regulate Artificial Intelligence, a technology that teaches itself and, just like any rogue human, can then choose to ignore regulators and regulations?
Why would it do that?
Maybe "it" knows better? Or perhaps it will just "think" it does.
Occasionally, especially in the early days, "its" lofty thoughts may well prove to be totally wrong for its "owner", or should I say controller. So how do we stay in control?
But this assumes the underlying premise is correct: that we as human beings should be in control. Because just maybe, as things evolve, AI will more often be right and make the better decisions. At that point, is it then the responsible and correct thing to try to regulate it? "Would you expect a chimp to try to regulate a human?" is perhaps not quite the right analogy, but I cannot think of a better comparison...