Yes! I have a feeling that the more clearly you define the goal of "something that has human-like intelligence but isn't human," the less sense it will make, because it will become clear that our idea of intelligence depends heavily on the specific ways that humans relate to the world. I wrote a blog post on this: https://gushogg-blake.com/posts/agi