Would you let AI read your child a bedtime story? With time as a precious commodity, technology is helping us organise and automate many day-to-day tasks. But are we replacing and outsourcing too many of our ‘human’ activities?
These questions were recently sparked by BookTrust, the UK reading charity, which conducted a survey in the run-up to this week’s Pyjamarama bedtime story fundraising event. It found that around 26% of parents use AI home assistants to read bedtime stories to their children. The statistic drew understandably shocked reactions, with many fearing we are sacrificing our emotional health and our ability to connect with one another for the convenience of technological devices. It suggests we may be missing the point: using tech to do the very things tech should be freeing up more time for us to do ourselves. The simple pleasure of reading a story to our children, a time for conversation and creativity, should be treated as sacred.
Then again, there have been moral panics at every stage of technological development; the fear of displacement is very much a human trait. Some argue that Alexa and co are no different from letting the likes of CBBC Bedtime Stories do the job; storytime was on the radio before TV came into the picture.
The fact remains that our children will grow up with technology in a way we never did. If we do not adapt and keep up to some degree, we risk creating a disconnect between generations. So we might view this period as the early stages of a learning process. The high percentage of those using Alexa and friends as bedtime story readers may be less a lasting pattern than initial novelty: parents and children trying it out for fun, satisfying their curiosity and getting to know the limits of the technology. As with social media, we have to make mistakes in order to work out for ourselves what constitutes good and bad usage, and with time we are likely to grow with AI and apply it in more sensible ways.
We have also always had to make decisions about assistance and outsourcing in our lives, from childcare to household maintenance, mostly to other humans. That is arguably the key difference now: the involvement of human beings is being reduced, and we have less control over, and less understanding of, the motives, ‘thought’ processes and actions of algorithm-driven AI.
With other humans, we choose who fits well with our own outlook and aims in life. When it comes to supporting our children’s development in particular, a relationship of trust is key, even if that relationship is with a particular TV channel or show. We cannot yet fully trust AI decisions, and may never be able to, due to the very nature of their design, particularly as they are made by large corporations that ultimately need to sell and promote things to fund their output. As one journalist and parent writing for the New York Times put it: ‘Alexa, after all, is not “Alexa.” She’s a corporate algorithm in a black box’. Even avid users do not fully trust home assistants, and a significant proportion of others refuse to have devices listening in on their lives at all.
AI and division of labour
Just as in law and business, we need to remember that AI is our tool, not something we are beholden to, and we should divide labour between it and ourselves accordingly. By using AI to outsource the repetitive, mundane, time-consuming tasks that distract us from the bigger picture, we free up headspace for the emotional and creative thought processes that are vital for progression and satisfaction in our roles.
When applied well, AI can enhance, rather than replace, the emotional connections that matter. In senior care, for example, AI and robots are helping to reduce the time staff and family spend on monitoring health and prompting residents to take medication, allowing them to engage on a more individual, emotional level, while voice-activated actions help residents be more independent. There is also evidence that greater use of voice-activated AI leads to less smartphone use: two thirds of people who use digital voice assistants like the Amazon Echo or Google Home use their smartphones less often, according to an Accenture survey. So at least we are lifting our eyes from the screen a little more.
We also need to continue educating ourselves about its flaws and limitations. As the technology currently stands, for example, there are questions around inclusivity and gender roles: many voice assistants are female-voiced, and AI systems have been found unable to recognise certain accents, facial features and speech impediments.
Regulation will increasingly play a role in addressing these issues in both the domestic and business contexts: the UK’s Centre for Data Ethics and Innovation (CDEI) recently announced it would investigate, amongst other things, algorithmic bias in decision-making. Transparency and legal compliance will help build trust. But it is up to us as users to regulate our own usage to best fit our lives, and to use the time we are given wisely — for storytime and more.