This project set out to explore the role of the Turing Test in the development of Artificial Intelligence (AI), with emphasis on the historical perspective. This report gives an introductory presentation of the Turing Test and Artificial Intelligence, and then presents two methods of analysis. The first is a quantitative search extracting the number of results from Google Scholar for the terms ‘Turing Test’ and ‘Artificial Intelligence’ over the range 1950 to 2019. The second is an analysis of two case studies, ELIZA and Google Duplex: to capture the historical development, ELIZA is an early research project from 1966 and Google Duplex a contemporary project from 2018. This report concludes that the Turing Test appears to have played a role in the historical development of AI. Results from the quantitative search show exponential growth, followed by a short stabilisation, before interest begins to decay over the last decade. Both case studies failed when subjected to a strict Turing Test, though Google Duplex appears to pass the Total Turing Test. Finally, this report also concludes that the Turing Test may no longer be relevant, as the media for AI have evolved beyond text and most developments are no longer concerned with tricking humans.
In this paper we describe a new EU infrastructure project dedicated to lexicography. The project is part of the Horizon 2020 programme and runs for four years (2018-2022). Its result will be an infrastructure that will (1) enable efficient access to high-quality lexicographic data, and (2) bridge the gap between more advanced and less-resourced scholarly communities working on lexicographic resources. One of the main issues addressed by the project is that current lexicographic resources have different, incompatible levels of structuring and are not equally suitable for application in Natural Language Processing and other fields. The project will therefore develop strategies, tools and standards for extracting, structuring and linking lexicographic resources, enabling their inclusion in Linked Open Data and the Semantic Web, as well as their use in the context of digital humanities.