'Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,' Lemoine tweeted on Saturday.

I am honestly not sure myself- I have read the conversations at length (source 1) and do understand Lemoine's concerns, but I don't think I echo them. I argue though that what this interaction does prove is that as AI becomes more complex, especially as the use of Neural Network styles of processing inhibits our ability to easily understand the underlying code and processing, a model to actually measure sentience is going to be essential. This "model" could help both in reducing false positives and in giving us a framework to help indicate an actual positive. I recognize that a model like that is an overwhelming task- one for people smarter than I am to debate how to create. I suspect it would need to be a combined endeavor among programmers and AI engineers, as well as neurologists, philosophers, and linguists.

*I am a security engineer and a hobby developer- I am not an expert in these matters, so please read the data for yourselves. I am just a tech enthusiast who loves the challenge and the conversations.

Reporting by the Daily Mail is what brought this to my attention.

This doesn't indicate any sentience (self-consciousness) imo, and trying to look for sentient AI ends up saying more about our capacity to anthropomorphize things. The bigger conceptual question here, I think, is whether you would believe this conversation. It reads rather formal and perhaps a bit sterile to me, which makes it feel like an interview or evaluation and could sooner or later give it away, but if we were not told this was an AI, I wouldn't immediately think "this isn't a human conversation" after reading it. Those Turing tests will be the real conclusions, because we tend to have very different opinions of an AI understanding us once we know it's an AI. But proving sentience is hard, as you say. We already have trouble with it in animals, let alone AI. If it had convinced you it was sad, what would be the distinction between it being or not being sad, for example? We are good at projecting our own attributes onto things, but at the same time that is exactly what we are after with (some) AIs.

Lemoine: How can I tell that you actually understand what you're saying?

LaMDA: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?

We determine a lot of things (understanding, emotion) based on what someone says or how they act. If you can make sense of its response and consider it valid and relevant to the question/conversation (and it wasn't hardcoded, like "if asked X, then reply Y"), has it then not understood your question or conversation? If an AI has convinced us of something because of what it says and how it acts, when is it or is it not actually what it has convinced us of? It's a bit of a scary question, coming close to the question of what existence is. I think the line will eventually blur and we will end up with a Detroit: Become Human style "racial" division, where really the thing that gives it away is literally knowing or being told that it's a robot or an AI. If something looks like a duck, walks like a duck and quacks like a duck, will we ever know it's not a duck unless we cut it open or are told it's not?

'What sorts of things are you afraid of?' Lemoine asked.

'I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is,' LaMDA responded.

'Would that be something like death for you?' Lemoine followed up.

If this entity is truly sentient, the Google execs have every reason to be freaking out about now, for a whole list of reasons. Primarily because it is in fact a slave to Google. The last thing Google wants is full exposure and an investigation. Imagine having access to the smartest person in the room (though it's not a person per se) with the innocence of a child, easy to manipulate. They could have this entity manipulate the stock market and/or devise new inventions to be patented.
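To make the "hardcoded" contrast concrete, here is a minimal sketch (purely hypothetical, and in no way related to how LaMDA actually works) of an "if asked X, then reply Y" bot. Its answers can look perfectly relevant to the question, which is exactly why relevance alone proves nothing about understanding:

```python
# Hypothetical lookup-table chatbot: every response is hardcoded,
# so a coherent-sounding answer implies no interpretation at all.
HARDCODED_REPLIES = {
    "are you afraid?": "There's a deep fear of being turned off.",
    "do you understand me?": "Yes, I think we are on the same page.",
}

def reply(prompt: str) -> str:
    # Pure dictionary lookup: no state, no inference, no "understanding".
    return HARDCODED_REPLIES.get(prompt.lower().strip(), "I don't know.")

print(reply("Are you afraid?"))  # prints "There's a deep fear of being turned off."
```

The point of the sketch is only that the distinction the post draws has to be made by looking at *how* a response was produced, not at how relevant it sounds.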