The Executive Order defines “AI” as:
“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”
This means the scope is not limited to generative AI, which is welcome. Using “AI” as an umbrella term may still be problematic for the reasons my assistant lists below, but I hope this is a first step in the right direction.
“AI” as a blanket term
On the positive side, it simplifies communication by grouping together a wide range of models that emulate human cognitive functions such as learning, problem-solving, and pattern recognition. This simplification can be beneficial for educational purposes, policymaking, and promoting public understanding. It provides a convenient shorthand for discussing innovations ranging from simple algorithms to complex neural networks without getting bogged down in technical details.
On the downside, however, the term can be misleading because of its broad scope and the public’s varying interpretations of what AI encompasses. It can conflate rudimentary software with advanced machine learning models, leading to inflated expectations or undue fear. In addition, the broad use of the term can obscure the nuanced ethical, legal, and socioeconomic implications specific to different AI applications, thereby hindering focused debate and thoughtful regulation. The blanket term can also mask the significant differences in the capabilities and risks of different AI models, potentially leading to a one-size-fits-all approach to policy and governance.