Director Karen X. Cheng just posted a cool video where she uses OpenAI’s DALL-E to generate different outfits and then applies them to a video of her walking down the street. DALL-E is designed for still images, not video, so after generating the individual keyframes she used the (currently free) program EbSynth to propagate those keyframes across the rest of the footage, then DAIN to interpolate frames and smooth out the motion.
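For anyone curious how a pipeline like this fits together, here's a rough sketch of the first step: pull keyframes out of the source video and run each one through DALL-E's inpainting (image edits) endpoint to swap in a new outfit. The OpenAI SDK and OpenCV calls are real, but the file names (`walk.mp4`, `mask.png`), the prompt, and the keyframe interval are made up for illustration, and this isn't necessarily how Cheng did it. EbSynth and DAIN are standalone tools with no Python API, so those steps stay manual.

```python
"""Hypothetical sketch of the keyframe half of the pipeline:
extract keyframes with OpenCV, restyle each with DALL-E inpainting.
File names, prompt, and interval are illustrative only."""
import cv2
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def extract_keyframes(video_path: str, every_n: int = 30) -> list:
    """Grab every Nth frame to use as an EbSynth keyframe."""
    frames, cap = [], cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames


for i, frame in enumerate(extract_keyframes("walk.mp4")):
    path = f"keyframe_{i:03d}.png"
    cv2.imwrite(path, frame)
    # Inpaint a new outfit onto the keyframe; "mask.png" marks the
    # clothing region to regenerate (hypothetical file name).
    result = client.images.edit(
        image=open(path, "rb"),
        mask=open("mask.png", "rb"),
        prompt="a woman walking down the street in a red flamenco dress",
        n=1,
        size="1024x1024",
    )
    print(result.data[0].url)  # download, then feed into EbSynth by hand
```

From there, EbSynth maps each restyled keyframe onto the surrounding frames, and DAIN interpolates between them to hide the seams.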
She has more interesting experiments with DALL-E, AR and video processing over at her Instagram. (h/t to Boing Boing)