Except it's still a murky and misunderstood area even now; it's far from being sorted out. The fact that people like Musk, Zuckerberg and Hawking can all disagree shows how contentious it is.
I happen to know quite a bit about modern machine learning and AI. Asimov's Three Laws are known to everyone in the field, but they simply don't apply in the real world. What's really interesting, once you get into it, is that alien intelligence is, well, alien. It's hard for us to reason about. That's why I mentioned dolphins etc. I don't think we're really that far away from AI that resembles dolphins more than humans. Actually, worms or cockroaches first, but you get the idea. Real "human-like" AI is much further away.
What we are moving towards is a more generalised form of AI that we don't fully understand, one with exceptionally powerful capabilities in some areas (face recognition across huge databases, say) but completely lacking in others. It certainly won't think or act like us. Data, with his lack of emotion & imagination, is a slightly naive simplification, but the basic principle holds. If you look at animals, we tend to relate more strongly to the ones we can easily anthropomorphise, like dogs, because they have similar social structures. But an octopus is probably more intelligent than most dogs, yet we cannot relate to it, so it's much harder for us to judge whether it's intelligent at all. People are going to disagree about this for a long time, perhaps even in a universe with humanoid aliens.
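To make that "powerful but narrow" point concrete, here's a minimal sketch (toy numbers, random vectors standing in for real embeddings) of how face recognition at scale actually works: nearest-neighbour search over embedding vectors. It's superhuman at matching one face against a hundred thousand records, while having no concept whatsoever of what a face is.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100,000 enrolled "faces", each reduced to a 128-dimensional embedding.
database = rng.normal(size=(100_000, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)

# A new photo of person #42: the same embedding plus a little noise.
query = database[42] + rng.normal(scale=0.05, size=128)
query /= np.linalg.norm(query)

# "Recognition" is just the highest cosine similarity in the database.
best_match = int(np.argmax(database @ query))
print(best_match)  # 42 -- found instantly, with no understanding of faces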
Right now there is a supposed debate about AI: whether it should have rights, or whether there should be an "ethical black box" (heavily inspired by Asimov). But most of the arguments are put forth either by people with agendas or by people who don't understand the current state of ML. There is a lot of arguing over the same hypotheticals Asimov explored in the 40s and 50s, while the real problems, like privacy and the concentration of power in corporations like Facebook, get ignored. The usual sci-fi parallel (including Data) is the robot of incredible strength & speed, though in fact Skynet is probably closer to the current reality.
That part is getting a bit outside the scope of the episode, but I don't think it's fair to assume these issues will be completely sorted out before they actually arise, whether that's 300 years from now or only 30. In fact, it's far more likely we will have ill-conceived laws driven by popular misconceptions. Laws are made by politicians, often ignoring the advice of experts.
The details of the eventual debate won't be exactly the same, but the basic problem is that we do not have a good understanding of cognition, reasoning or sentience. That's unlikely to be resolved until we can agree on what is or isn't sentient. If we can't tell whether a dolphin or an octopus should have equal rights to a human, we're even less likely to be able to make that judgement about a machine.
The reality is likely to be even more confusing, because we may end up with machines that are sentient but that can also be losslessly cloned at any point in time. That completely defies everything we currently know, or think we know, about life. It would change the definition of death, for one thing. Some people believe that will never happen because they assume the brain is a quantum system, but the evidence so far says it isn't.
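As a toy illustration of what "losslessly cloned" means (just a sketch; a plain dict stands in for whatever state a real digital mind would have): any digital state is ultimately data, so it can be serialized and duplicated exactly, which is the part with no biological analogue.

```python
import pickle

# Stand-in for a digital mind: its entire state is just data.
agent = {
    "weights": [0.12, -3.4, 7.7],              # learned parameters
    "memories": ["saw a face", "heard a name"],
}

snapshot = pickle.dumps(agent)   # freeze the complete state at this instant
clone = pickle.loads(snapshot)   # reconstruct it elsewhere, bit-for-bit

print(clone == agent)   # True  -- indistinguishable from the original
print(clone is agent)   # False -- yet a genuinely separate instance
```

Which of those two is the "real" one, and what dying means when a snapshot exists, are exactly the questions our current ideas about life can't answer.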
Anyway, it's worth pointing out that sci-fi is about thought experiments. It's not supposed to be a totally accurate prediction of the future; it just needs to be internally consistent so that problems can't be solved by magic (an occasional weakness of Star Trek). It's at its best when the scientific premise is laid out at the start and no new science is introduced to solve the problem. "Measure of a Man" is a perfect example of that, whether you think the initial scenario likely or not.