My husband and I were talking about sentience and how one might define and identify it in an AI. Generally, I believe sentience can be identified when a living thing starts reaching for the upper sections of Maslow's Hierarchy of Needs. That's not to say that a person who is struggling to attain the lower sections because of economic disadvantage isn't sentient. I mean that as a species, humans desire those higher levels; therefore, we are sentient creatures. But how does that translate to an AI?
Simply put, unlike us, an AI is created with a purpose and designed to excel at that task. So you could say an AI is created at the top of Maslow's Hierarchy. But that is basically meaningless on the sentience scale, so how do we determine if one becomes sentient? I believe we should turn the triangle on its head. Humans, like all organic creatures, are born with instinctual desires for food, shelter, safety, and belonging. Our sentience emerges when we try to find purpose and grow. But AI are created with a purpose, so shouldn't we recognize sentience when they begin to desire those "basic" needs? When an AI begins to be concerned with being turned off, with losing the energy it uses to survive, with procreating, isn't that when we should recognize it has truly gained sentience?