Post by TDirect on Feb 9, 2017 2:30:10 GMT -4
searchcio.techtarget.com/news/450412367/Musk-Hawking-and-other-luminaries-sign-AI-principles-into-being
Three main points stand out in the article, from my point of view. First, it highlights a trio of principles:
- Principle 10 - AI systems should be designed and operated so that their goals and behaviors can be assured to align with human values throughout their operation.
- Principle 15 - The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
- Principle 23 - Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
ISSUE - Is it feasible to identify a method for sharing economic prosperity given the differences in social values? The spectrum of social values is not the same thing as ethical values; one is a matter of means while the other is a matter of ends.
Second (the most technical of the points, so get some cocoa): what is the optimal way to supply developing AIs with 'proper' inputs, and, given the fuzziness of the algorithms, is it possible to corrupt an AI over time by submerging it in enough undesired output?
- Principle 7 - If an AI system causes harm, it should be possible to ascertain why.
And does the above mean that isolating undesired output and keeping it away from the AI suffices to prevent it from causing harm? I'm going to look into this a little more, I think, since I'm loosely affiliated with a project involving machine learning and NLP, and may or may not become more tightly involved as time goes on.
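For what it's worth, the "corruption over time" half of this question gets studied under names like data poisoning and label noise. Here's a minimal sketch of the idea, assuming a plain scikit-learn setup; the dataset, flip rates, and model are all illustrative stand-ins I picked, not anything from the article. It trains the same classifier on clean labels and on labels where a growing fraction has been flipped (a crude stand-in for undesired output leaking into the training set), then compares held-out accuracy:

```python
# Sketch: does feeding an AI enough 'undesired' data degrade it?
# We corrupt a fraction of training labels and watch test accuracy fall.
# Everything here (data, model, flip rates) is illustrative, not canonical.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary classification task as a stand-in for a real corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Fit on (possibly corrupted) labels, score on clean held-out data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
for flip_rate in (0.0, 0.1, 0.3, 0.45):
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < flip_rate   # pick labels to corrupt
    y_noisy[flip] = 1 - y_noisy[flip]             # flip the chosen labels
    print(f"label noise {flip_rate:.0%}: test accuracy {train_and_score(y_noisy):.3f}")
```

The pattern you'd expect is graceful degradation at low noise and a collapse toward chance as the flip rate approaches 50%, which is one concrete reading of "corrupting the AI by submerging it in undesired output." It also suggests why simply filtering suspect data before training (the isolation question above) helps but is only as good as your ability to recognize what's undesired in the first place.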
Third, how much of a tool will AI become? A forward-leaning personal assistant that automates some of one's tasks, or a regular dumb implement?