Experts say an "artificial intelligence twin" could attend online meetings and video calls for you within several years.
They warn organisations urgently need to get up to speed on the privacy risks of using this technology now.
Zoom chief executive Eric Yuan recently said that people will be able to send their digital twin to a work meeting so they can do other things, like go to the beach.
He told The Verge website the tech - a deepfake avatar which could look and speak like you in a meeting - could be ready in five or six years.
Auckland-based innovation consultant Ed Dever said he's using the beginnings of that technology now, with a tool called Supernormal.
"There have been times where I haven't been able to make a meeting, but I've got colleagues who are going to be in that meeting already and I can just send my AI to that meeting for me, collect notes, then read those notes afterwards and understand everything that happened, a lot more quickly then watching an entire video call played back."
The AI can't yet replicate Dever's image or participate, but it records, transcribes and provides summaries of the meeting in minutes - including picking out key ideas, tasks, or sales points when prompted.
He said the programme sometimes glitches, getting confused by different people's voices. But it allows him to forget about taking notes and home in on what one person is saying.
"In the previous times, there were just things that would get missed.
"You weren't writing down every single thing a person said, verbatim, where now you can go back and if someone had an idea that was worded in a specific way, you can go back and look up exactly what they said and exactly how they said it and use that going forward."
A 2023 Microsoft Work Trend Index report said people were having three times as many online Teams meetings and calls as in 2020.
AI expert Andrew Chen said forget six years: AI digital twins could be developed to attend video calls within the next two.
The basis of it is already there in AI chatbots like ChatGPT and in deepfake image and video generators.
He said he's not looking forward to that world.
"We're social creatures, and part of our decision making is interacting with other people and social proof.
"If you are in an environment where an AI can go to a meeting for you, then probably you didn't need to go to that meeting in the first place if it can be automated away."
While the current transcription tools are not at digital twin level yet, Chen expects them to become more sophisticated and more widely used across sectors.
He said it could result in more casual work meetings being recorded, and vast amounts of personal and sensitive information being stored in the AI tool.
"You have to provide them with the audio, and you might not have control over what happens to that audio afterwards. Once they've transcribed it, they're going to keep a record of that text, you might not have much control over what happens to that text. It's all buried in the terms and conditions."
RNZ recently reported hundreds of GPs were using AI note-taking programmes during consultations to ease their workload.
Privacy Commissioner Michael Webster said workplaces must be aware of the risk of data leaking, and that this was critical where sensitive information, such as health information, was being discussed.
He gave the example of a construction worker revealing feelings of depression to their boss in a meeting that was being recorded and transcribed by AI.
"Our bottom line is don't input into a generative AI tool personal, confidential information unless you're absolutely sure that input information is not going to be retained or disclosed by the AI tool."
Webster said that while this technology was rapidly developing, organisations must do a full privacy assessment before using it.
He reiterated his call for tougher fines for those who breach the Privacy Act. The current maximum penalty under the Act is a fine not exceeding $10,000, while in Australia, he said, fines can reach $50 million, "reflecting the digital age we live in".
He said if people were attending a GP consultation or meeting that was being transcribed by AI, they could ask if the notes were being checked for accuracy, or how they would be stored.
"The key issue for me - from a privacy perspective - is that people are aware of what's being done with their personal information, where it's being stored, whether it's being stored securely, and whether it's being protected from inappropriate access or use."