On March 16, the even more powerful GPT-4 arrived. As an era-defining product, will ChatGPT replace humans? And how should the law regulate the development of artificial intelligence? We are sharing the OpenAI Charter published on the official ChatGPT website, in which GPT's creators answer this question. The full text follows:
ChatGPT: The OpenAI Charter (full text)

Our Charter describes the principles we use to execute on OpenAI's mission.

Published: April 9, 2018

This document reflects the strategy we’ve refined over the past two years, including feedback from many people internal and external to OpenAI. The timeline to AGI remains uncertain, but our Charter will guide us in acting in the best interests of humanity throughout its development.

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:

Broadly distributed benefits
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

Long-term safety
We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community.

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”

Technical leadership
To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities—policy and safety advocacy alone would be insufficient.

We believe that AI will have broad societal impact before AGI, and we’ll strive to lead in those areas that are directly aligned with our mission and expertise.

Cooperative orientation
We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges.

We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.