Is there any other way for us to understand this world?
This work conducts an experiment that assesses the possibility of cooperation between natural and artificial algorithms: that is, it tests how well our brains (natural) work together with AI (man-made).
Neuroplasticity allows our senses to perceive the world in various ways: we might see not with our eyes but with our skin, or listen not through our ears but through our taste buds, to name but a few.
In general, skin vision relies on the brain parsing incoming pieces of information and shaping cognition from them.
In this respect, I introduced an object recognition system, YOLO v3 (You Only Look Once), into the installation: on one side, the results given by YOLO v3 are converted into the Braille reading system and delivered to the skin of the thigh; on the other side, pixels are mapped directly to motors on the back.
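The label-to-Braille side can be sketched roughly as follows. This is a minimal illustration, not the installation's actual code: it assumes a hypothetical array of six vibration motors on the thigh arranged like a single Braille cell (two columns of three dots), and the function name is invented. The dot numbering follows standard Braille (dots 1-3 down the left column, 4-6 down the right).

```python
# Standard Braille letter patterns: each letter maps to its set of raised
# dots, numbered 1-3 down the left column and 4-6 down the right.
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4},
    "j": {2, 4, 5}, "k": {1, 3}, "l": {1, 2, 3}, "m": {1, 3, 4},
    "n": {1, 3, 4, 5}, "o": {1, 3, 5}, "p": {1, 2, 3, 4},
    "q": {1, 2, 3, 4, 5}, "r": {1, 2, 3, 5}, "s": {2, 3, 4},
    "t": {2, 3, 4, 5}, "u": {1, 3, 6}, "v": {1, 2, 3, 6},
    "w": {2, 4, 5, 6}, "x": {1, 3, 4, 6}, "y": {1, 3, 4, 5, 6},
    "z": {1, 3, 5, 6},
}

def label_to_motor_frames(label):
    """Turn a detected class label (e.g. YOLO's "cat") into a sequence of
    on/off frames for six motors, one frame per letter; unknown characters
    produce an all-off frame (a pause)."""
    frames = []
    for ch in label.lower():
        dots = BRAILLE.get(ch, set())
        frames.append([1 if d in dots else 0 for d in range(1, 7)])
    return frames
```

Each frame would then be held on the motors for a fixed duration, so a detected object is "read" on the thigh letter by letter.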
With the motors stimulating the skin, the experiment described above, which aims to test the collaboration between brains and AI, is carried out. The system, run jointly by machine learning and the human brain, can give rise to new neural networks for identifying words, texts, and pixels.
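The direct pixel-to-motor pathway on the back can be sketched as a simple downsampling: each motor vibrates in proportion to the average brightness of its patch of the camera image. This is a hedged illustration under assumptions not stated in the work: a grayscale frame given as rows of 0-255 values, and a hypothetical 8 x 8 motor grid.

```python
def frame_to_motor_grid(frame, rows=8, cols=8):
    """Downsample a grayscale frame (list of rows of 0-255 pixel values)
    to a rows x cols grid of motor intensities in the range 0.0-1.0."""
    h, w = len(frame), len(frame[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # The patch of pixels this motor is responsible for.
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            patch = [frame[y][x] for y in range(y0, y1)
                                 for x in range(x0, x1)]
            # Average brightness, scaled to a 0.0-1.0 motor duty cycle.
            row.append(sum(patch) / len(patch) / 255.0)
        grid.append(row)
    return grid
```

With this mapping the wearer feels a coarse, tactile version of whatever the camera sees, which is the sense in which the brain is asked to learn a new "retina" on the back.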
Moreover, because the weights of the object-identification model are easy to update, we can expand the database by drawing on the considerable open sources available on the internet.
Thanks to its great accessibility, this technology can help blind people extend their senses beyond their former limits: a kind of vision that, though not seen with the eyes, broadens their world and makes it more palpable.
Furthermore, the installation includes a remote-controllable robot that serves as a visual extension, an approach to exploring the world in real time and forming new cognitive pathways from what the camera captures.
As mentioned earlier, the formation of new neural connections in the brain is the means of realizing data physicalization; through the collaboration between AI (algorithms made by humans) and brains (calculations conducted naturally by human brains), the world can thus be re-perceived through the skin.
Eventually, we will become aware of signals, lives, and selves, and be able to feel, to learn, to practice, to live.
While on exhibition, the installation leaves a philosophical question for people to contemplate: despite its many talents for collecting pixels and for processing and predicting messages, is the device skillful enough to perceive the existence of a non-pixelated world?
If we view the human brain as a machine that predicts, it may turn out that the brain functions just like the device we exhibit: we are playing a virtual game run by computers, and thus we are unable to perceive, grasp, or conceive what the world really looks like.
However, we can still feel and sense the world, since our bodies allow us to sense and to have feelings about our surroundings.
The fact is, we do live in the world. Therefore rejoicing, ecstasy, distress, helplessness, and pain all exist because the world makes us feel and sense.