THE CONCEPT OF KNOWLEDGE AND COGNITION IN ARTIFICIAL INTELLIGENCE MODELS – A PHILOSOPHICAL PERSPECTIVE
Artificial intelligence is a technology that has revolutionized numerous fields over recent decades. Its applications span scientific and engineering domains as well as everyday life: image recognition, autonomous vehicles, language translation, and medical diagnostic systems. AI systems, particularly those based on deep learning, exhibit capabilities comparable to human performance and, in some areas, even surpass it. This raises fundamental questions about their epistemic nature. Does AI genuinely generate knowledge? If so, what is the nature of this knowledge? Should we view its achievements as a form of novel cognition or merely as advanced processing of pre-existing data? A key issue thus emerges: do AI systems fulfill the criteria traditionally associated with human cognition, such as the capacity for justification, awareness of cognitive processes, or the creation of new epistemic content? This paper addresses these questions by focusing on philosophical conceptions of knowledge and analyzing whether AI can be regarded as an autonomous cognitive agent. These considerations frame AI not merely as a technology augmenting human cognitive processes but as a potential step towards redefining knowledge in the technological era.
- Book Chapter
- 10.4018/978-1-60960-551-3.ch003
- Jan 1, 2011
There has been a significant increase in the use of biomedical images in clinical medicine, disease research, and education. While the literature describes several methods developed and implemented for content-based image retrieval and recognition, these have been unable to make significant inroads into the biomedical image recognition domain. At the same time, the use of computer-aided diagnosis, based on descriptor extraction and classification approaches, has been increasing. This interest stems from the need for specialized methods specific to each biomedical image type, and from the lack of advances in general image recognition systems. In this chapter, the authors present intelligent information description techniques and the classification methods most used in image retrieval and recognition systems. A multicriteria classification method applied to sickle cell disease image databases is given, and the system's recognition performance is illustrated and discussed.
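As a toy illustration of the descriptor-extraction-plus-classification pipeline surveyed in such chapters (this sketch is not from the chapter; the images, bin count, and function names are all invented for the example), a content-based retrieval system can describe each image by a normalized intensity histogram and rank a database by histogram-intersection similarity to the query:

```python
# Minimal content-based image retrieval sketch (hypothetical data):
# each "image" is a 2-D list of grayscale values; its descriptor is a
# normalized intensity histogram, and retrieval ranks a database by
# histogram-intersection similarity to the query descriptor.

def histogram_descriptor(image, bins=4, max_val=256):
    """Normalized intensity histogram of a 2-D grayscale image."""
    counts = [0] * bins
    n = 0
    for row in image:
        for px in row:
            counts[px * bins // max_val] += 1
            n += 1
    return [c / n for c in counts]

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def retrieve(query, database, bins=4):
    """Rank database images by descriptor similarity to the query."""
    q = histogram_descriptor(query, bins)
    scored = [(intersection(q, histogram_descriptor(img, bins)), i)
              for i, img in enumerate(database)]
    return sorted(scored, reverse=True)

dark  = [[10, 20], [30, 40]]
light = [[200, 210], [220, 230]]
query = [[15, 25], [35, 45]]
print(retrieve(query, [dark, light])[0][1])  # → 0 (the dark image matches)
```

A real biomedical system would use richer descriptors (texture, shape, color) and a trained classifier, but the describe-then-compare structure is the same.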
- Conference Article
- 10.1109/sii.2013.6776687
- Dec 1, 2013
In general, a multifunctional image recognition system requires significant resources: ample memory, a fast CPU or GPU, an image capture device or camera, and display devices and interfaces. Most PC-based image recognition systems can handle complicated image processing, but for some specific applications users prefer a simpler, dedicated system; it is neither economical nor efficient to devote expensive resources to easy tasks. Developing a standalone microcontroller-based image recognition system is therefore a better choice for such applications. In this research, an assistive robot for people disabled by spinal cord injury was implemented. The assistive robot, equipped with real-time image recognition, can automatically deliver a small assistive pacifier (with a limit switch inside) to the user's mouth, allowing the user to control many devices independently. The real-time image recognition subsystem consists of two major parts: digital logic circuits built in an FPGA module, which handle real-time image processing, and an 8051 microcontroller, programmed to communicate with the FPGA module and control the robot to finish its task automatically. The paper also presents a programmable color-tone table that adapts the recognition system to environments with different lighting conditions: changing the lighting requires only modifying the table. In experiments verifying the robot's performance, the successful recognition rate exceeded 93%.
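The programmable color-tone table idea can be sketched roughly as follows; this is a plain-Python guess at the concept rather than the FPGA/8051 implementation the paper describes, and every table value and name here is hypothetical:

```python
# Hedged sketch of a "programmable color-tone table" (names hypothetical):
# each (R, G, B) pixel is quantized to a coarse tone index, and a
# per-environment table maps tone indices to target labels.  Re-programming
# the table, rather than the classifier, adapts the system to new lighting.

def tone_index(r, g, b, levels=4, max_val=256):
    """Quantize an RGB pixel into one of levels**3 coarse tone bins."""
    q = lambda v: v * levels // max_val
    return (q(r) * levels + q(g)) * levels + q(b)

# Tone tables for two lighting conditions (illustrative values only):
# under dim light the target's reds fall into darker tone bins.
BRIGHT_TABLE = {tone_index(250, 40, 40): "target"}
DIM_TABLE    = {tone_index(130, 20, 20): "target"}

def classify(pixel, table):
    """Look up a pixel's tone bin in the active color-tone table."""
    return table.get(tone_index(*pixel), "background")

print(classify((250, 40, 40), BRIGHT_TABLE))  # → target
print(classify((130, 20, 20), BRIGHT_TABLE))  # → background (table mismatch)
print(classify((130, 20, 20), DIM_TABLE))     # → target after re-programming
```

The design point is that the expensive part (pixel quantization) stays fixed in hardware, while the cheap part (the table) is rewritten per environment.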
- Research Article
- 10.52554/kjcl.2024.109.447
- Dec 31, 2024
- The Korean Association of Civil Law
Faced with the development of artificial intelligence-based technologies, do we need to modify our civil liability (tort liability) system, or our compensation system in general? Although an AI system has several distinctive features compared to other technologies, it is its "autonomy" that poses the essential question of whether a harm caused by its output can be attributed to a human. The aim of this paper is to overview and analyze the current theoretical situation regarding this topic in Japan. Just as in other countries, the adaptability of the existing civil liability regime to accidents caused by autonomous vehicles is considered a question to be solved as quickly as possible in Japan. Although the liability regime established by the Act on Securing Compensation for Automobile Accidents in 1955 will work acceptably well for SAE Level 4 (High Driving Automation) vehicles, there are worries that it won't for SAE Level 5 (Full Driving Automation) vehicles because of the eventual disappearance of the person responsible under this regime. This is why the Japanese government and legal scholars have started to discuss alternative regimes aiming to reinforce the civil liability of manufacturers of autonomous vehicles, such as renewing the compulsory liability insurance, elaborating an independent liability regime for autonomous vehicles, or creating a compensation fund. What about AI systems in general? The modernization of the common civil liability regimes will be inevitable: the traditional fault liability regime should be reinforced by introducing general measures for the presumption of fault and/or causality, and the existing product liability regime established by the Product Liability Act in 1994 should be modified to adapt to digitalization.
However, these solutions might be nothing more than stopgap measures, considering the inherent limits of these regimes (the necessity to identify "misconduct" by a human using an AI system, the difficulty of defining a "defect" in an AI system, etc.). Three new regimes merit further discussion: no-fault liability focusing on the uncontrollability of autonomous AI systems, vicarious liability treating AI systems as auxiliaries of humans, and a compensation fund intended to indemnify victims of the outputs of AI systems. Each regime raises many theoretical questions that current Japanese civil liability doctrine is not yet ready to answer satisfactorily.
- Research Article
- 10.1016/j.eswa.2012.08.059
- Sep 5, 2012
- Expert Systems with Applications
Increasing adaptability of a speech into sign language translation system
- Book Chapter
- 10.1007/978-3-030-50267-6_5
- Jan 1, 2020
The article explores the effects increasing automation has on our conceptions of human agency. We conceptualize the central features of human agency as ableness, intentionality, and rationality, and define responsibility as a central feature of moral agency. We discuss suggestions in favor of regarding AI systems as moral agents on account of their functions but join those who reject this view. We consider the possibility of assigning moral agency to automated AI systems in settings of machine-human cooperation but conclude that AI systems are not genuine participants in joint action and cannot be held morally responsible. Philosophical issues notwithstanding, the functions of AI systems change human agency: they affect our goal setting and pursuit by influencing our conceptions of the attainable. Recommendation algorithms on news sites, social media platforms, and search engines modify our possibilities of receiving accurate and comprehensive information, hence influencing our decision making. Sophisticated AI systems replace human workers even in such demanding fields as medical surgery, language translation, visual arts, and musical composition. Being second to a machine in an increasing number of fields of expertise will affect how human beings regard their own abilities. We need a deeper understanding of how technological progress takes place and how it is intertwined with economic and political realities. Moral responsibility remains a human characteristic. It is our duty to develop AI to serve morally good ends and purposes. Protecting and strengthening the conditions of human agency in any AI environment is part of this task.
- Research Article
- 10.33108/visnyk_tntu2025.01.062
- Jan 1, 2025
- Scientific journal of the Ternopil national technical university
In the era of data technologies, new, more informative features for medical diagnostic cognitive software systems have been obtained through topological data analysis, in the form of Betti numbers. Applying these features can yield higher accuracy in the diagnosis of neurodegenerative diseases, which is extremely important, since the choice of a patient's treatment protocol depends on that accuracy. The higher accuracy is achieved because the new features are topological: their values reflect the topological structure of experimentally measured data in the form of electroencephalographic (EEG) signals characterizing the activity of the patient's brain. On the basis of experimental EEG signals and the data-science methods of topological data analysis, new, more informative topological features were obtained for the development of high-precision medical diagnostic cognitive software systems in neurology. The scientific approach rests on the methods and analytical techniques of algebraic topology, in particular category theory and simplicial geometry (simplicial complexes). Specifically, the Betti numbers obtained from topological analysis of the measured EEG signals count the holes of different dimensions in the Vietoris-Rips simplicial complex built over the data.
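As a minimal, hypothetical illustration of the Betti numbers mentioned above (not code from the article): the 0-th Betti number of a Vietoris-Rips complex at scale eps is simply the number of connected components of the graph joining points closer than eps, which a union-find pass can count; the higher Betti numbers the article relies on (holes, voids) require full simplicial homology, e.g. via a library such as GUDHI or Ripser.

```python
# Illustrative sketch: Betti-0 of a Vietoris-Rips complex at scale eps
# equals the number of connected components of the eps-neighborhood graph.
# The points below are synthetic stand-ins for features extracted from
# EEG signals, not real data.

def betti_0(points, eps):
    """Count connected components of the eps-neighborhood graph (union-find)."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, p in enumerate(points):
        for j in range(i + 1, len(points)):
            q = points[j]
            if sum((a - b) ** 2 for a, b in zip(p, q)) < eps ** 2:
                parent[find(i)] = find(j)  # union the two components

    return len({find(i) for i in range(len(points))})

# Two well-separated clusters merge into one component as eps grows.
cloud = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(betti_0(cloud, eps=0.5))   # → 2
print(betti_0(cloud, eps=10.0))  # → 1
```

Tracking how such counts change as eps varies is the essence of persistent homology, the standard tool for extracting Betti numbers from signal data.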
- Research Article
- 10.1162/daed_e_01897
- May 1, 2022
- Daedalus
This dialogue is from an early scene in the 2014 film Ex Machina, in which Nathan has invited Caleb to determine whether Nathan has succeeded in creating artificial intelligence.1 The achievement of powerful artificial general intelligence has long held a grip on our imagination, not only for its exciting as well as worrisome possibilities, but also for its suggestion of a new, uncharted era for humanity. In opening his 2021 BBC Reith Lectures, titled "Living with Artificial Intelligence," Stuart Russell states that "the eventual emergence of general-purpose artificial intelligence [will be] the biggest event in human history."2

Over the last decade, a rapid succession of impressive results has brought wider public attention to the possibilities of powerful artificial intelligence. In machine vision, researchers demonstrated systems that could recognize objects as well as, if not better than, humans in some situations. Then came the games. Complex games of strategy have long been associated with superior intelligence, and so when AI systems beat the best human players at chess, Atari games, Go, shogi, StarCraft, and Dota, the world took notice. It was not just that AIs beat humans (although that was astounding when it first happened), but the escalating progression of how they did it: initially by learning from expert human play, then from self-play, then by teaching themselves the principles of the games from the ground up, eventually yielding single systems that could learn, play, and win at several structurally different games, hinting at the possibility of generally intelligent systems.3

Speech recognition and natural language processing have also seen rapid and headline-grabbing advances. Most impressive has been the recent emergence of large language models capable of generating human-like outputs. Progress in language is of particular significance given the role language has always played in human notions of intelligence, reasoning, and understanding.
While the advances mentioned thus far may seem abstract, those in driverless cars and robots have been more tangible given their embodied and often biomorphic forms. Demonstrations of such embodied systems exhibiting increasingly complex and autonomous behaviors in our physical world have captured public attention. Also in the headlines have been results in various branches of science in which AI and its related techniques have been used as tools to advance research, from materials and environmental sciences to high energy physics and astronomy.4 A few highlights, such as the spectacular results on the fifty-year-old protein-folding problem by AlphaFold, suggest the possibility that AI could soon help tackle science's hardest problems, such as in health and the life sciences.5

While the headlines tend to feature results and demonstrations of a future to come, AI and its associated technologies are already here and pervade our daily lives more than many realize. Examples include recommendation systems, search, language translators - now covering more than one hundred languages - facial recognition, speech to text (and back), digital assistants, chatbots for customer service, fraud detection, decision support systems, energy management systems, and tools for scientific research, to name a few. In all these examples and others, AI-related techniques have become components of other software and hardware systems as methods for learning from and incorporating messy real-world inputs into inferences, predictions, and, in some cases, actions.
As director of the Future of Humanity Institute at the University of Oxford, Nick Bostrom, noted back in 2006, "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."6

As the scope, use, and usefulness of these systems have grown for individual users, researchers in various fields, companies and other types of organizations, and governments, so too have concerns when the systems have not worked well (such as bias in facial recognition systems), or have been misused (as in deepfakes), or have resulted in harms to some (in predicting crime, for example), or have been associated with accidents (such as fatalities from self-driving cars).7

Dædalus last devoted a volume to the topic of artificial intelligence in 1988, with contributions from several of the founders of the field, among others. Much of that issue was concerned with questions of whether research in AI was making progress, of whether AI was at a turning point, and of its foundations - mathematical, technical, and philosophical - with much disagreement. However, in that volume there was also a recognition, or perhaps a rediscovery, of an alternative path toward AI - the connectionist learning approach and the notion of neural nets - and a burgeoning optimism for this approach's potential. Since the 1960s, the learning approach had been relegated to the fringes in favor of the symbolic formalism for representing the world, our knowledge of it, and how machines can reason about it. Yet no essay captured some of the mood at the time better than Hilary Putnam's "Much Ado About Not Very Much," which questioned the Dædalus issue itself: "Why a whole issue of Dædalus? Why don't we wait until AI achieves something and then have an issue?"

This volume of Dædalus is indeed the first since 1988 to be devoted to artificial intelligence.
This volume does not rehash the same debates; much else has happened since, mostly as a result of the success of the machine learning approach that was being rediscovered and reimagined, as discussed in the 1988 volume. This issue aims to capture where we are in AI's development and how its growing uses impact society. The themes and concerns herein are colored by my own involvement with AI. Besides the television, films, and books that I grew up with, my interest in AI began in earnest in 1989 when, as an undergraduate at the University of Zimbabwe, I undertook a research project to model and train a neural network.9 I went on to do research on AI and robotics at Oxford. Over the years, I have been involved with researchers in academia and labs developing AI systems, studying AI's impact on the economy, tracking AI's progress, and working with others in business, policy, and labor grappling with its opportunities and challenges for society.10

The authors of the twenty-five essays in this volume range from AI scientists and technologists at the frontier of many of AI's developments to social scientists at the forefront of analyzing AI's impacts on society. The volume is organized into ten sections. Half of the sections are focused on AI's development, the other half on its intersections with various aspects of society. In addition to the diversity in their topics, expertise, and vantage points, the authors bring a range of views on the possibilities, benefits, and concerns for society. I am grateful to the authors for accepting my invitation to write these essays.

Before proceeding further, it may be useful to say what we mean by artificial intelligence. The headlines and increasing pervasiveness of AI and its associated technologies have led to some conflation and confusion about what exactly counts as AI.
This has not been helped by the current trend - among researchers in science and the humanities, startups, established companies, and even governments - to associate anything involving not only machine learning, but data science, algorithms, robots, and automation of all sorts with AI. This could simply reflect the hype now associated with AI, but it could also be an acknowledgment of the success of the current wave of AI and its related techniques and their wide-ranging use and usefulness. I think both are true; but it has not always been like this. In the period now referred to as the AI winter, during which progress in AI did not live up to expectations, there was a reticence to associate most of what we now call AI with AI.

Two types of definitions are typically given for AI. The first are those that suggest that it is the ability to artificially do what intelligent beings, usually humans, can do. The human abilities invoked in such definitions include visual perception, speech recognition, and the capacity to reason, solve problems, discover meaning, generalize, and learn from experience. Definitions of this type are considered by some to be limiting in their human-centricity as to what counts as intelligence and in the benchmarks for success they set for the development of AI (more on this later).
The second type of definition tries to be free of human-centricity, characterizing an intelligent agent or system by what it does, whatever its origin, makeup, or method. Definitions of this type also suggest the pursuit of goals, which could be given to the system, self-generated, or learned.13 That both types of definitions are employed throughout this volume yields insights of its own.

These definitional distinctions notwithstanding, the term AI, much to the chagrin of some in the field, has come to be what cognitive and computer scientist Marvin Minsky called a "suitcase word."14 It is packed variously, depending on who you ask, with approaches for achieving intelligence, including those based on logic, probability, information and control theory, neural networks, and various other learning, inference, and planning methods, as well as their instantiations in software, hardware, and, in the case of embodied intelligence, systems that can perceive, move, and manipulate objects.

Three questions cut through the discussions in this volume: 1) Where are we in AI's development? 2) What opportunities and challenges does AI pose for society? 3) How much about AI is really about us?

Notions of intelligent machines date all the way back to antiquity.15 Philosophers, too, among them Hobbes, Leibniz, and Descartes, have been dreaming about AI for a long time; Daniel Dennett suggests that Descartes may have even anticipated the Turing Test.16 The idea of computation-based machine intelligence traces to Alan Turing's invention of the universal Turing machine in the 1930s, and to the ideas of several of his contemporaries in the mid-twentieth century. But the birth of artificial intelligence as we know it, and the use of the term, is generally attributed to the now famed Dartmouth summer workshop of 1956.
The workshop was the result of a proposal for a two-month summer project by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon whereby "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."17

In their respective contributions to this volume, "From So Simple a Beginning: Species of Artificial Intelligence" and "If We Succeed," and in different but complementary ways, Nigel Shadbolt and Stuart Russell chart the key ideas and developments in AI, its periods of excitement as well as the aforementioned AI winters. The current AI spring has been underway since the 1990s, with headline-grabbing breakthroughs appearing in rapid succession over the last ten years or so: a period that Jeffrey Dean describes in the title of his essay as a "golden decade," not only for the pace of AI development but also its use in a wide range of sectors of society, as well as areas of scientific research.18 This period is best characterized by the approach to achieving artificial intelligence through learning from experience, and by the success of neural networks, deep learning, and reinforcement learning, together with methods from probability theory, as ways for machines to learn.19

A brief history may be useful here: In the 1950s, there were two dominant visions of how to achieve machine intelligence. One vision was to use computers to create a logic and symbolic representation of the world and our knowledge of it and, from there, create systems that could reason about the world, thus exhibiting intelligence akin to the mind. This vision was espoused most by Allen Newell and Herbert Simon, along with Marvin Minsky and others. Closely associated with it was the "heuristic search" approach that supposed intelligence was essentially a problem of exploring a space of possibilities for answers.
The second vision was inspired by the brain, rather than the mind, and sought to achieve intelligence by learning. In what became known as the connectionist approach, units called perceptrons were connected in ways inspired by the connection of neurons in the brain. At the time, this approach was most associated with Frank Rosenblatt. While there was initial excitement about both visions, the first came to dominate, and did so for decades, with some successes, including so-called expert systems.

Not only did this approach benefit from championing by its advocates and plentiful funding, it came with the suggested weight of a long intellectual tradition - exemplified by Descartes, Boole, Frege, Russell, and Church, among others - that sought to manipulate symbols and to formalize and axiomatize knowledge and reasoning. It was only in the late 1980s that interest began to grow again in the second vision, largely through the work of David Rumelhart, Geoffrey Hinton, James McClelland, and others. The history of these two visions and the associated philosophical ideas are discussed in Hubert Dreyfus and Stuart Dreyfus's 1988 Dædalus essay "Making a Mind Versus Modeling the Brain: Artificial Intelligence Back at a Branchpoint."20 Since then, the approach to intelligence based on learning, the use of statistical methods, back-propagation, and training (supervised and unsupervised) has come to characterize the current dominant approach.

Kevin Scott, in his essay "I Do Not Think It Means What You Think It Means: Artificial Intelligence, Cognitive Work & Scale," reminds us of the work of Ray Solomonoff and others linking information and probability theory with the idea of machines that can not only learn, but compress and potentially generalize what they learn, and the emerging realization of this in the systems now being built and those to come.
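To make the connectionist starting point concrete, here is a minimal perceptron using Rosenblatt's learning rule on a toy, linearly separable problem (the data and parameters are illustrative, not drawn from the essay):

```python
# A minimal perceptron in the spirit of Rosenblatt's connectionist units:
# the learning rule nudges the weights toward each misclassified example
# until a separating boundary is found.

def train_perceptron(samples, labels, lr=0.1, epochs=50):
    """Learn weights w and bias b for labels in {-1, +1}."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Misclassified (or on the boundary): move toward the example.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Linearly separable toy problem: logical AND, a classic
# perceptron-learnable function.
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # → [-1, -1, -1, 1]
```

The famous limitation that stalled this vision for decades is visible here too: replace the labels with XOR and no choice of w and b can separate the classes with a single unit.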
The success of the machine learning approach has benefited from the boom in the availability of data to train the algorithms, thanks to the growth in the use of the Internet and other applications and services. In research, the data explosion has been the result of new scientific instruments and observation platforms and data-generating breakthroughs, for example, in astronomy and in genomics. Equally important has been the co-evolution of the software and hardware used, especially chip architectures better suited to the parallel computations involved in data- and compute-intensive neural networks and other machine learning approaches, as Dean discusses.

Several authors delve into progress in key subfields of AI.21 In their essay "Searching for Computer Vision North Stars," Fei-Fei Li and Ranjay Krishna chart developments in machine vision and the creation of standard data sets such as ImageNet that could be used for benchmarking performance. In their respective essays "Human Language Understanding & Reasoning" and "The Curious Case of Commonsense Intelligence," Chris Manning and Yejin Choi discuss different eras and ideas in natural language processing, including the recent emergence of large language models, comprising hundreds of billions of parameters, that use transformer architectures and self-supervised learning on vast amounts of data.22 The resulting pretrained models are impressive in their capacity to take natural language prompts for which they have not been trained specifically and generate human-like outputs, not only in natural language, but also images, software code, and more, as Mira Murati discusses and illustrates in "Language & Coding Creativity."
Some have started to refer to these large language models as foundational models, in that once they are trained, they are adaptable to a wide range of tasks and outputs.23 But despite their unexpected performance, these large language models are still early in their development and have many shortcomings and limitations that are highlighted in this volume and elsewhere, including by some of their developers.24

In "The Machines from Our Future," Daniela Rus discusses the progress in robotic systems, including advances in the underlying technologies, as well as in their integrated design that enables them to operate in the physical world. She highlights the limitations in the "industrial" approaches used thus far and suggests new ways of conceptualizing robots that draw on insights from biological systems. In robotics, as in AI more generally, there has always been a tension as to whether to copy or simply draw inspiration from how humans and other biological organisms achieve intelligent behavior.
Elsewhere, AI researcher Demis Hassabis and colleagues have explored how neuroscience and AI learn from and inspire each other, although the exchange has so far run more in one direction than the other. For all the success of the current approaches to AI, there are still many shortcomings and problems: systems can fail to perform as intended, produce biased or erroneous outputs that lead to harm, or rely on flawed or incomplete information about the world, all of which can erode public trust. These shortcomings have captured the attention of the wider public as well as of researchers and policymakers. In recent years, there has been a proliferation of efforts to articulate principles and approaches for responsible AI, as well as initiatives, involving governments and international bodies, that aim to codify best practices. Equally important has been the question of who researches and develops AI, in both academia and industry, as has been well documented in recent studies; this matters in its own right, but also for the character of the resulting AI systems and their intersections with society. Beyond the limitations and problems associated with what AI currently cannot do lie the questions raised if it could advance to something more capable or more general.
In their Turing Lecture on deep learning, Geoffrey Hinton and his fellow laureates took stock of where deep learning stands and highlighted its current limitations. In the case of natural language processing, Manning and Choi examine the challenges that remain despite the success of large language models, and others have questioned the notion that large language models do anything resembling learning or understanding at all. Further contributions discuss the open problems in multi-agent systems, such as how to reason about other agents, as well as the challenges, both technical and ethical, that arise especially when the groups involved include both humans and machines. There is also a growing recognition that we do not yet have adequate means for evaluating AI systems, especially as they become more capable and their contexts of use expand. AI and its related techniques are proving to be powerful tools for research in science, as examples in this volume show, not only helping to analyze results but increasingly contributing to discovery by design. The possibility that more powerful AI could accelerate progress in science, and on some of humanity's hardest challenges, has long been a key motivation for many at the frontier of AI research. The resolution of open problems in learning and reasoning that could lead to more capable systems raises the question of whether the current approaches, characterized by deep learning, foundational models, and reinforcement learning, will suffice, or whether different approaches are needed, such as cognitive agent architectures or ones based on logic and probability theory, to name a few.
Whether, and what mix of, approaches will be needed for the next level of AI remains unresolved, but many believe the current approaches, along with new ideas and learning architectures, have yet to reach their limits. Debate about the sufficiency of the current approaches is associated with the question of whether artificial general intelligence can be achieved and, if so, how and when. Artificial general intelligence stands in contrast to what is called narrow AI: systems designed and trained for particular tasks and goals. The development of general AI, on the other hand, aims for more powerful systems - at least as powerful as humans - generally able to solve problems across domains and, in some conceptions, with the capacity to learn and improve themselves as well as to set and pursue goals of their own. Whether and when such intelligence will be achieved is disputed, but most agree that its achievement would have profound consequences, as often imagined in fiction and film, from 2001: A Space Odyssey through The Terminator to Ex Machina, whether for good or ill. There is growing agreement among many at the frontier of AI research that we should prepare for the possibility of powerful general AI - with respect to its safety and alignment with humans, its development and use - and that we should build these considerations into how we approach AI's development now. Most of the research, development, and investment in AI today is focused on narrow AI, in all the variety of what Nigel Shadbolt calls the species of AI. This is understandable given the potential for useful applications and for value creation across sectors of the economy. However, a few organizations have made the development of general AI their explicit goal, and the most prominent of these have each demonstrated results of increasing generality, though still a long way from artificial general intelligence. The most discussed societal impact of AI and automation is on jobs and the future of work. This is not new. In the 1960s, amid excitement about AI and automation and concerns about their impact on jobs, a U.S. national commission concluded that such technologies were important for growth and productivity and that "technology eliminates jobs, not work." Most recent assessments, including some I have been involved in, have concluded that over time more jobs are created than are lost, but that it is the transitions, the skills required, and the distribution of the gains that will matter most. In their essay on automation, AI, and work, Laura Tyson and John Zysman discuss these implications for work and workers, and further essays take up the consequences for incomes and inequality, as well as the opportunities, especially in developing economies. In "The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence," Erik Brynjolfsson discusses how the use of human benchmarks in the development of AI favors the creation of AI that substitutes for rather than augments human labor. He argues that the direction AI's development takes in this respect, and the resulting outcomes for work, will depend on
The incentives of researchers, companies, and policymakers matter because the risk, on this argument, is that more effort will go into substituting for humans than into augmenting them. One critique of pessimistic projections is that they extrapolate too much from the capabilities of the present and do not look far enough into the future at what AI will be capable of. The implications of AI for work could differ from those of past waves of automation: until now, automation has mostly displaced physical and routine work, but AI will increasingly bear on more cognitive and creative tasks and, if early examples are any indication, even such tasks are not out of its reach. In other words, there are now in the world machines that learn and that expand the range of what they can do; the range of problems they can take on may ultimately be bounded only by the range of problems to which the human mind has been applied. This possibility was anticipated by Herbert Simon and Allen Newell, and there are reasons to think that this time could be different. Two counterarguments are usually offered. One is that new forms of labor will emerge in which humans will be preferred by other humans for their own sake, even when machines are capable of performing those tasks as well as or even better than people. The other is that AI will create so much abundance, and all without the need for human labor, that the question of work will give way to the one John Maynard Keynes anticipated when he wrote that, "for the first time since his creation man will be faced with his real, his permanent problem: how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well." However, most researchers believe that we are not close to a future in which the necessity of work will disappear, and that until then there are other effects and transitions that must be addressed in the labor market, now and in the years ahead, such as shifts in skills and occupations and changes in how humans work alongside increasingly capable machines, issues that Tyson and Zysman and others discuss. These are not the only concerns raised by the progress of AI. Stuart Russell offers an account of the risks that could flow from artificial general intelligence, once considered a distant prospect. But even before we get to general-purpose AI, the opportunities for companies and countries, for productivity and growth, as well as the benefits from AI and its related technologies, are more than enough to motivate its vigorous pursuit by companies and countries in the development, deployment, and use of AI. At the same time, many worry about who leads the field: it is generally acknowledged that China has become a major force in AI, as evidenced by its growth in AI research, and, as highlighted in several essays, this will have implications for companies and countries given the dual-use potential of such technologies. Differences may also emerge in the ways countries and regions approach AI and its regulation, and in which companies and countries have the capacity to invest and compete in AI.
The role of AI in intelligence, weapons systems, autonomous operations, and other aspects of security is increasingly consequential.
- Research Article
29
- 10.1016/j.infsof.2021.106701
- Dec 1, 2021
- Information and Software Technology
DeepBackground: Metamorphic testing for Deep-Learning-driven image recognition systems accompanied by Background-Relevance
- Research Article
11
- 10.3390/s20247287
- Dec 18, 2020
- Sensors (Basel, Switzerland)
This paper presents a novel method for integration of industrially-oriented human-robot speech communication and vision-based object recognition. Such integration is necessary to provide context for task-oriented voice commands. Context-based speech communication is easier, the commands are shorter, hence their recognition rate is higher. In recent years, significant research was devoted to integration of speech and gesture recognition. However, little attention was paid to vision-based identification of objects in industrial environment (like workpieces or tools) represented by general terms used in voice commands. There are no reports on any methods facilitating the abovementioned integration. Image and speech recognition systems usually operate on different data structures, describing reality on different levels of abstraction, hence development of context-based voice control systems is a laborious and time-consuming task. The aim of our research was to solve this problem. The core of our method is extension of Voice Command Description (VCD) format describing syntax and semantics of task-oriented commands, as well as its integration with Flexible Editable Contour Templates (FECT) used for classification of contours derived from image recognition systems. To the best of our knowledge, it is the first solution that facilitates development of customized vision-based voice control applications for industrial robots.
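The core difficulty the abstract describes, that voice commands use general terms while vision systems report specific object classes, can be illustrated with a small sketch. This is purely hypothetical code, not the authors' VCD or FECT formats: the lexicon, field names, and `resolve_command` helper are invented for illustration.

```python
# Hypothetical sketch: resolving a general term from a voice command to
# specific object classes reported by a vision system. The lexicon and
# data structures are illustrative, not the paper's VCD/FECT formats.

# A toy "VCD-like" lexicon: general terms used in spoken commands mapped
# to the contour-template class labels the vision system can report.
LEXICON = {
    "tool": {"wrench", "screwdriver", "hammer"},
    "workpiece": {"flange", "bracket"},
}

def resolve_command(term, detections):
    """Return detected objects whose class matches the spoken general term."""
    classes = LEXICON.get(term, {term})  # fall back to the literal term
    return [d for d in detections if d["cls"] in classes]

detections = [
    {"cls": "wrench", "xy": (120, 80)},
    {"cls": "flange", "xy": (40, 200)},
]
print(resolve_command("tool", detections))  # only the wrench matches "tool"
```

The point of the sketch is the mapping layer itself: once general terms resolve to vision-level classes, a short command like "pick up the tool" carries enough context to be grounded in the scene.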
- Research Article
1
- 10.1088/1742-6596/1574/1/012097
- Jun 1, 2020
- Journal of Physics: Conference Series
The construction of image recognition system is inseparable from the development of computer network technology and artificial intelligence. Although the previous large-scale integrated circuit technology has made amazing achievements, it still cannot directly perceive the sound, image, text and other information. With artificial intelligence and modern network technology to open up new achievements in this research field, it is particularly important to carry out the research of image recognition system. The bronze culture and art of the Chinese Bronze Age are the crystallization of the wisdom of the ancient Chinese labouring people and a precious heritage our ancestors left us. How to preserve these precious cultural heritages with the means and methods of modern science and technology is a necessary process to further understand the time-honored characteristics of the Chinese nation. This paper will carry out the research from the Angle of modern artificial intelligence network technology integrating art, and strive to depict and preserve the colorful bronze culture systematically and comprehensively. This paper tries to construct a set of bronze cultural image recognition and management system, so that most users can realize the appreciation and management of ancient culture in a modern way through the intervention of artificial intelligence and network technology.
- Conference Article
7
- 10.1109/isctt51595.2020.00007
- Nov 1, 2020
Recently, autonomous driving has become a hot area of research, and image recognition is one of the key technologies that autonomous cars need to drive safely on the road. With the development of the field, deep learning has been widely applied to image recognition and plays an important role in it. In this paper, the development of image recognition technology in autonomous driving cars is introduced alongside deep learning. Different deep learning network models, such as the Deep Neural Network (DNN), Recurrent Neural Network (RNN), and Convolutional Neural Network (CNN), are also analyzed. In addition, the theoretical basis of deep learning in image recognition is summarized and its applications are surveyed. Finally, the paper looks into the future development of autonomous vehicles and of image recognition technology. It is concluded that image recognition systems based on deep learning networks are already widely used in autonomous driving cars and have a promising future for further development.
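The CNN mentioned in the abstract is built from convolution operations over image patches. As an illustration only (not from the paper), a minimal NumPy sketch of a single valid-mode 2D convolution, the building block that CNN-based image recognition stacks many times over:

```python
# Minimal sketch of the 2D convolution at the heart of CNN image
# recognition (valid mode, stride 1); real systems stack many such layers.
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image and sum elementwise products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A toy image with a vertical boundary and a horizontal-difference kernel
# that responds at left-right contrast, i.e. a simple vertical-edge detector.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, -1]], dtype=float)
print(conv2d(image, kernel))  # nonzero only at the edge column
```

Trained CNNs learn such kernels from data rather than hand-coding them, which is why they dominate image recognition for driving scenes.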
- Research Article
4
- 10.1016/j.comcom.2023.11.004
- Nov 8, 2023
- Computer Communications
Standardization and technology trends of artificial intelligence for mobile systems
- Book Chapter
5
- 10.4018/979-8-3693-1355-8.ch001
- May 20, 2024
In an era where AI systems are increasingly integrated into critical applications, ensuring their robustness and reliability is of paramount importance. This study embarks on a comprehensive exploration of innovative metrics aimed at benchmarking and ensuring the robustness of AI systems. Through extensive research and experimentation, the authors introduce a set of groundbreaking metrics that demonstrate superior performance across diverse AI applications and scenarios. These metrics challenge existing benchmarks and set a new gold standard for the AI community to aspire towards. Robustness and reliability are cornerstones of trustworthy AI systems. Traditional metrics often fall short in assessing the real-world performance and robustness of AI models. To address this gap, this research team has developed a suite of novel metrics that capture nuanced aspects of AI system behavior. These metrics evaluate not only accuracy but also adaptability, resilience to adversarial attacks, and fairness in decision-making. By doing so, the authors provide a more comprehensive view of an AI system's capabilities. This study's significance lies in its potential to drive the AI community towards higher standards of performance and reliability. By adopting these innovative metrics, researchers, developers, and stakeholders can better assess and compare the robustness of AI systems. This, in turn, will lead to the development of more dependable AI solutions across various domains, including healthcare, finance, autonomous vehicles, and more. This research represents a significant step forward in ensuring the robustness and reliability of AI systems. The introduction of innovative metrics challenges the status quo and sets a new performance standard for AI systems, ultimately contributing to the creation of more trustworthy and dependable AI technologies.
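The chapter's metrics themselves are not reproduced here, but the general idea of scoring robustness beyond plain accuracy can be sketched. The following is an illustrative example of one common pattern (not the authors' metric): comparing accuracy under input perturbation to clean accuracy, so that 1.0 means no degradation.

```python
# Illustrative robustness score (not the chapter's actual metrics):
# ratio of accuracy on perturbed inputs to accuracy on clean inputs.

def accuracy(model, samples):
    """Fraction of (input, label) pairs the model classifies correctly."""
    return sum(model(x) == y for x, y in samples) / len(samples)

def robustness_ratio(model, samples, perturb):
    """1.0 = perturbation costs nothing; lower = model degrades."""
    clean = accuracy(model, samples)
    perturbed = accuracy(model, [(perturb(x), y) for x, y in samples])
    return perturbed / clean if clean else 0.0

# Toy classifier: predicts the sign of a scalar input.
model = lambda x: 1 if x >= 0 else -1
samples = [(0.5, 1), (2.0, 1), (-1.0, -1), (-0.2, -1)]
noise = lambda x: x + 0.3  # hypothetical shift perturbation

print(robustness_ratio(model, samples, noise))  # one sample flips: 0.75
```

Scores like this complement accuracy in the way the chapter argues for: two models with identical clean accuracy can differ sharply once inputs are shifted or attacked.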
- Research Article
1
- 10.52403/ijshr.20230145
- Dec 1, 2024
- International Journal of Science and Healthcare Research
The swift progress of artificial intelligence (AI) has led to the creation of self-improving algorithms that enable AI systems to improve their own abilities without requiring human input. These autonomous AI systems are propelling the advancement of technologies in sectors ranging from healthcare to self-driving vehicles. Self-improving algorithms are essential in empowering AI systems to acquire knowledge and refine their operations by examining real-time data and updating their models with fresh insights. This research explores the workings of self-improving algorithms and delves into their problem-solving abilities and applications across various fields of study. Furthermore, it discusses the potential of these algorithms and examines the opportunities and challenges posed by their growing autonomy, specifically in domains such as ethics, security, and interaction between humans and AI systems. Through an exploration of current uses and of the technological progress that enables self-improving AI systems, this research seeks to offer a comprehensive insight into the impact these systems will have on shaping future intelligent technologies. Keywords: Self-Improving Algorithms, Autonomous AI, Machine Learning, Artificial Intelligence, Intelligent Systems, Continuous Learning, Autonomous Decision-Making, Optimization, Ethics in AI, AI in Healthcare, AI in Autonomous Vehicles, AI Systems, AI Adaptability
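The "refine their operations by examining real-time data" loop the abstract describes can be sketched in its simplest form. This is a generic illustration, not the paper's algorithms: online gradient descent on a one-parameter linear model, where each new observation nudges the parameter without any human intervention.

```python
# Minimal sketch of a self-adjusting loop: a model that updates its
# parameter from each incoming observation (online gradient descent on a
# 1-D linear model y = w * x). Illustrative only, not the paper's method.

def online_update(w, x, y, lr=0.1):
    """One self-adjustment step: nudge w to reduce squared error on (x, y)."""
    error = w * x - y
    return w - lr * error * x

w = 0.0  # start knowing nothing
stream = [(1.0, 2.0), (2.0, 4.0), (1.5, 3.0)] * 20  # data follows y = 2x
for x, y in stream:
    w = online_update(w, x, y)  # adapt continuously, no retraining pause

print(round(w, 2))  # w converges toward the true slope 2.0
```

Production systems wrap far more machinery around this loop (validation, rollback, drift detection), which is exactly where the ethics and security questions the abstract raises come in: the model that is deployed tomorrow is no longer the one that was audited yesterday.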
- Research Article
- 10.34190/iccws.20.1.3348
- Mar 24, 2025
- International Conference on Cyber Warfare and Security
This paper examines the integration of human factors engineering into Explainable Artificial Intelligence (XAI) to develop AI systems that are both human-centered and technically robust. The increasing use of AI technologies in high-stakes domains, such as healthcare, finance, and emergency response, underscores the urgent need for explainability, trust, and transparency. However, the field of XAI faces critical challenges, including the absence of standardized definitions and evaluation frameworks, which hinder the assessment and effectiveness of explainability techniques. Human factors engineering (HFE), an interdisciplinary field focused on optimizing human-system interactions, offers a comprehensive framework to address these challenges. By applying principles such as user-centered design, error management, and system adaptability, human factors engineering ensures AI systems align with human cognitive abilities and behavioral patterns. This alignment enhances usability, fosters trust, and reduces blind reliance on AI by ensuring explanations are clear, actionable, and tailored to diverse user needs. Additionally, human factors engineering emphasizes inclusivity and accessibility, promoting equitable AI systems that serve varied populations effectively. This paper explores the intersection of HFE and XAI, highlighting their complementary roles in bridging algorithmic complexity with actionable understanding. It further investigates how human factors engineering principles address sociotechnical challenges, including fairness, accountability, and inclusivity, in AI deployment. The findings demonstrate that the integration of human factors engineering and XAI advances the creation of AI systems that are not only technologically sophisticated but also ethically aligned and user-focused.
This interdisciplinary synergy is a pathway to develop equitable, effective, and trustworthy AI solutions, fostering informed decision-making and enhancing user confidence across diverse applications.