The output of Sora, OpenAI’s latest text-to-video model, looks really impressive for an off-the-shelf tool. What I found even more interesting is that OpenAI explicitly names the model’s weakness as not understanding “cause and effect.”
Their example is a video of a person biting into a cookie but potentially leaving no bite mark on it. There is also a scene of a person running the wrong way on a treadmill.
Yet OpenAI downplays the absolute lack of cause-and-effect reasoning:
“It may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect.”
while doubling down on its promise of AGI:
“Sora serves as a foundation for models that can understand and simulate the real world, a capability we believe will be an important milestone for achieving AGI.”
Still, the model is clearly useful for a number of business applications, most obviously marketing and promotional videos. Once the 60-second clip limit is lifted, it could also be a game changer for creative industries such as museums, the performing and visual arts, galleries, and fashion design.