From generative AI to the brain: five takeaways
The big strides seen in generative AI are based not on somewhat obscure algorithms but on clearly defined generative principles. The resulting concrete implementations have proven themselves in large numbers of applications. We suggest that it is imperative to thoroughly investigate which of these generative principles may also be operative in the brain, and hence relevant for cognitive neuroscience. In addition, ML research has led to a range of interesting characterizations of neural information processing systems. We discuss five examples (the shortcomings of world modeling, the generation of thought processes, attention, neural scaling laws, and quantization) that illustrate how much neuroscience could potentially learn from ML research.
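Of the five examples, attention has the most compact formal core. A minimal NumPy sketch of standard scaled dot-product attention, included here as an illustration of the mechanism rather than code from the article:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d)) V for query, key, and value matrices."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # pairwise query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # convex mix of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 queries of dimension 4
K = rng.normal(size=(5, 4))   # 5 keys
V = rng.normal(size=(5, 4))   # 5 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one mixed value vector per query
```

Each output row is a data-dependent weighted average of the value rows, which is the property the takeaways article flags as potentially relevant to neural information routing.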
- Research Article
- 10.34190/icair.4.1.3220
- Dec 4, 2024
- International Conference on AI Research
This paper addresses the need for a positive and effective learning environment for engineering students in non-computer science fields to grasp Generative AI principles while navigating the intricate balance between its application and developmental insights. Drawing from pedagogical theories and cognitive science, especially Leinenbach and Corey's (2004) Universal Design for Learning, this study proposes a framework tailored to the unique needs and backgrounds of engineering students. The framework emphasizes active learning strategies, collaborative problem-solving, and real-world applications to engage learners in meaningful experiences with Generative AI concepts. The central learning context is an M.Sc. program in management engineering with a course/training opportunity in Machine Learning Fundamentals using Python based on Google Colab. The introduction of Generative AI is based on selected Google libraries for Python. Furthermore, this paper explores various instructional approaches and tools to scaffold students' understanding of Generative AI, including hands-on projects, case studies, and interactive simulations. It also addresses ethical considerations and societal implications associated with Generative AI deployment, encouraging students to critically reflect on the broader impacts of their technical decisions. Through a synthesis of pedagogical best practices and AI development principles, this paper contributes to the ongoing discourse on effective AI education for non-computer science disciplines. By embracing a holistic approach that integrates theory with practical application, educators can empower engineering students to harness the transformative potential of Generative AI while navigating its complexities responsibly and ethically.
- Supplementary Content
- 10.1007/s12194-025-00968-1
- Jan 1, 2025
- Radiological Physics and Technology
In recent years, generative AI has attracted significant public attention, and its use has been rapidly expanding across a wide range of domains. From creative tasks such as text summarization, idea generation, and source code generation, to the streamlining of medical support tasks like diagnostic report generation and summarization, AI is now deeply involved in many areas. Today's breadth of AI applications is clearly distinct from what was seen before generative AI gained widespread recognition. Representative generative AI services include DALL·E 3 (OpenAI, California, USA) and Stable Diffusion (Stability AI, London, England, UK) for image generation, and ChatGPT (OpenAI, California, USA) and Gemini (Google, California, USA) for text generation. The rise of generative AI has been driven by advances in deep learning models and the scaling up of data, models, and computational resources in line with scaling laws. Moreover, the emergence of foundation models, which are trained on large-scale datasets and possess general-purpose knowledge applicable to various downstream tasks, is creating a new paradigm in AI development. These shifts brought about by generative AI and foundation models also profoundly impact medical image processing, fundamentally changing the framework for AI development in healthcare. This paper provides an overview of diffusion models used in image-generation AI and large language models (LLMs) used in text-generation AI, and introduces their applications in medical support. This paper also discusses foundation models, which are gaining attention alongside generative AI, including their construction methods and applications in the medical field. Finally, the paper explores how to develop foundation models and high-performance AI for medical support by fully utilizing national data and computational resources.
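The scaling laws mentioned above are commonly summarized as a saturating power law, with loss falling as L(N) = a·N^(−α) + c as model size N grows. A toy illustration with invented constants (not fitted to any real model family):

```python
import numpy as np

# Hypothetical power-law scaling of loss with parameter count N.
# a, alpha, and c are invented for illustration; real values are fit per model family.
a, alpha, c = 10.0, 0.3, 1.7

N = np.array([1e6, 1e7, 1e8, 1e9])   # model sizes spanning three orders of magnitude
L = a * N ** (-alpha) + c            # predicted loss at each size

for n, loss in zip(N, L):
    print(f"N={n:.0e}  loss={loss:.3f}")
```

The pattern this captures is that each order of magnitude of scale buys a diminishing, but predictable, reduction in loss toward the irreducible floor c.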
- Research Article
- 10.1111/bjet.13587
- Apr 21, 2025
- British Journal of Educational Technology
This study explores the role of generative AI (GenAI) in providing formative feedback in children's digital learning experiences, specifically in the context of mathematics education. Using multimodal data, the research compares AI-generated feedback with feedback from human instructors, focusing on its impact on children's learning outcomes. Children engaged with a digital body-scale number line to learn addition and subtraction of positive and negative integers through embodied interaction. The study followed a between-group design, with one group receiving feedback from a human instructor and the other from GenAI. Eye-tracking data and system logs were used to evaluate students' information processing behaviour and cognitive load. The results revealed that while task-based performance did not differ significantly between conditions, the GenAI feedback condition demonstrated lower cognitive load, and students showed different visual information processing strategies between the two conditions. The findings provide empirical support for the potential of GenAI to complement traditional teaching by providing structured and adaptive feedback that supports efficient learning. The study underscores the importance of hybrid intelligence approaches that integrate human and AI feedback to enhance learning through synergistic feedback. This research offers valuable insights for educators, developers and researchers aiming to design hybrid AI-human educational environments that promote effective learning outcomes.
Practitioner notes
What is already known about this topic?
- Embodied learning approaches have been shown to facilitate deeper cognitive processing by engaging students physically with learning materials, which is especially beneficial in abstract subjects like mathematics.
- GenAI has the potential to enhance educational experiences through personalized feedback, making it crucial for fostering student understanding and engagement.
- Previous research indicates that hybrid intelligence that combines AI with human instructors can contribute to improved educational outcomes.
What this paper adds
- This study empirically examines the effectiveness of GenAI-generated feedback compared to human instructor feedback in the context of a multisensory environment (MSE) for math learning.
- Findings from system logs and eye-tracking analysis reveal that GenAI feedback can support learning effectively, particularly in helping students manage their cognitive load.
- The research uncovers that GenAI and teacher feedback lead to different information processing strategies. These findings provide actionable insights into how feedback modality influences cognitive engagement.
Implications for practice and/or policy
- The integration of GenAI into educational settings presents an opportunity to enhance traditional teaching methods, enabling an adaptive learning environment that leverages the strengths of both AI and human feedback.
- Future educational practices should explore hybrid models that incorporate both AI and human feedback to create inclusive and effective learning experiences, adapting to the diverse needs of learners.
- Policymakers should establish guidelines and frameworks to facilitate the ethical and equitable adoption of GenAI technologies for learning. This includes addressing issues of trust, transparency and accessibility to ensure that GenAI systems are effectively supporting, rather than replacing, human instructors.
- Research Article
- 10.1016/j.polgeo.2024.103134
- May 24, 2024
- Political Geography
The computational logics of large language models (LLMs) or generative AI, from the early models of CLIP and BERT to the explosion of text and image generation via ChatGPT and DALL-E, are increasingly penetrating the social and political world. Not merely in the direct sense that generative AI models are being deployed to govern difficult problems, whether decisions on the battlefield or responses to pandemics, but also because generative AI is shaping and delimiting the political parameters of what can be known and actioned in the world. Contra the promise of a generalizable "world model" in computer science, the article addresses how and why generative AI gives rise to a model of the world, and with it a set of political logics and governing rationalities that have profound and enduring effects on how we live today. The article traces the genealogies of generative AI models, how they have come into being, and why some concepts and techniques that animate these models become durable forms of knowledge that actively shape the world, even long after a specific material commercial GPT model has moved on to a new iteration. Though generative AI retains significant traces of former scientific and computational regimes, in statistical practices, probabilistic knowledge, and so on, it is also dislocating epistemological arrangements and opening them to novel ways of perceiving, characterising, classifying, and knowing the world.
Four defining aspects of the political logic of generative AI are elaborated: i) generativity as something more than the capacity to generate image or text outputs, so that a generative logic acts upon the world understood as estimates of “underlying distributions” in data; ii) latency as a political logic of compression in which (by contrast with claims to reduction or distortion) the thing that is hidden, unknown or latent becomes surfaced and amenable to being governed; iii) broken and parallelized sequences as the ordering device of the political logic of generative AI, where attention frameworks radically change the possibilities for governing non-linear problems; iv) pre-training and fine-tuning as a computational logic of generative AI that simultaneously shapes a “zero shot politics” oriented towards unencountered data and new tasks. Across each of the four aspects, the article maps the emerging contemporary political logic of generative AI.
- Research Article
- 10.4108/eetiot.5637
- Apr 4, 2024
- EAI Endorsed Transactions on Internet of Things
One of the most well-known generative AI models is the Generative Adversarial Network (GAN), which is frequently employed for data generation or augmentation. This paper implements a reliable GAN-based CNN deepfake detection method that utilizes a GAN as an augmentation element. The aim is to give the CNN model a large collection of images so that it can train better on the intrinsic qualities of the images. The major objective of this research is to show how GAN innovations have enhanced and broadened the use of generative AI principles, particularly in classifying fake images, called deepfakes, that pose concerns about misrepresentation and individual privacy. To identify these fake photos, additional synthetic images that closely resemble the training data are created using the GAN model. It has been observed that GAN-augmented datasets can improve the robustness and generality of CNN-based detection models, which correctly distinguish between real and fake images with 96.35% accuracy.
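The augment-then-detect pipeline described above can be sketched in miniature. In this toy stand-in, a Gaussian sampler plays the role of the trained GAN, a nearest-centroid rule plays the role of the CNN detector, and all data are invented feature vectors, not images from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for real/fake image feature vectors (the paper works with images).
real = rng.normal(loc=0.0, scale=1.0, size=(200, 8))
fake = rng.normal(loc=1.5, scale=1.0, size=(200, 8))

def augment(X, n):
    """Stand-in 'generator': sample new points matching the empirical class
    distribution. A trained GAN would play this role in the actual pipeline."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    return rng.normal(mu, sigma, size=(n, X.shape[1]))

# Enlarge each class with synthetic samples before training the detector.
real_aug = np.vstack([real, augment(real, 200)])
fake_aug = np.vstack([fake, augment(fake, 200)])

# Nearest-centroid classifier in place of the CNN detector.
c_real, c_fake = real_aug.mean(axis=0), fake_aug.mean(axis=0)

def predict(x):
    return "fake" if np.linalg.norm(x - c_fake) < np.linalg.norm(x - c_real) else "real"

held_out_fakes = rng.normal(loc=1.5, scale=1.0, size=(50, 8))
acc = np.mean([predict(x) == "fake" for x in held_out_fakes])
print(f"accuracy on held-out fakes: {acc:.2f}")
```

The structural point survives the simplification: the detector is trained on a dataset enlarged by generator samples, which is what the paper credits for the improved robustness.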
- Research Article
- 10.32628/cseit2410612455
- Oct 31, 2024
- International Journal of Scientific Research in Computer Science, Engineering and Information Technology
This research paper explores the transformative potential of generative AI in the context of document processing within large financial organizations, with a particular focus on fraud detection. As financial institutions increasingly rely on vast amounts of documentation for operations ranging from customer onboarding to compliance, the inefficiencies and limitations of traditional manual processing methods become glaringly apparent. These legacy systems are not only time-consuming and prone to human error but also struggle with scalability, a critical requirement in today’s fast-paced financial environment. Moreover, manual systems and traditional Optical Character Recognition (OCR) engines often lack the necessary accuracy and contextual understanding to reliably process complex financial documents and detect fraudulent activities. While OCR technology has automated certain aspects of document processing, its inherent limitations in accuracy, particularly in dealing with degraded documents or complex layouts, and its inability to interpret context, significantly impede its effectiveness in high-stakes financial applications. Furthermore, OCR’s limited capability in detecting subtle indicators of fraud leaves financial organizations vulnerable to increasingly sophisticated fraudulent schemes. Generative AI emerges as a revolutionary solution to these challenges by enhancing the accuracy, scalability, and security of document processing systems. Unlike traditional OCR, generative AI models are designed to understand and interpret the context of documents, thereby significantly improving the accuracy of text recognition, even in complex scenarios. These AI models, trained on vast datasets, are capable of processing large volumes of documents in parallel, making them ideally suited for the high-speed, high-volume environments characteristic of financial institutions. 
Additionally, generative AI incorporates advanced algorithms that enhance fraud detection capabilities by analyzing patterns, detecting anomalies, and cross-referencing data across multiple documents. This approach not only improves the detection of fraudulent activities but also reduces the likelihood of false positives, thereby enhancing the overall reliability of the system. The paper further delves into the practical applications of generative AI in various critical areas within financial organizations. Key applications include Know Your Customer (KYC) compliance, where AI streamlines the processing and verification of customer documents, thereby ensuring both compliance with regulatory requirements and the authenticity of the information provided. In loan processing, generative AI accelerates the analysis of loan applications, providing real-time risk assessments that enable faster decision-making. Additionally, the technology is applied in invoice and payment processing, where it automates and verifies transactions, reducing errors and ensuring the timely execution of financial operations. In the realm of contract analysis, generative AI facilitates the extraction and interpretation of key terms and clauses, enabling more effective contract negotiation and management. Beyond its practical applications, the paper also addresses the continuous learning capabilities of generative AI models, which allow them to evolve and adapt to new data and document types over time. This feature is particularly crucial in the financial sector, where the types of documents and the nature of fraudulent activities are continually changing. The continuous learning aspect of generative AI ensures that the systems remain up-to-date and effective, even as new challenges and document types emerge. 
The research also highlights the comparative analysis between traditional OCR-based systems and AI-powered systems, demonstrating the superior performance, efficiency, and scalability of the latter. Moreover, the paper discusses the challenges associated with the implementation of generative AI in financial document processing. These include technical challenges such as the integration of AI systems with existing IT infrastructure, as well as regulatory and compliance issues that arise when deploying AI technologies in the highly regulated financial sector. Despite these challenges, the paper argues that the long-term benefits of adopting generative AI, including improved accuracy, enhanced fraud detection, and greater operational efficiency, far outweigh the initial hurdles. The research also considers the future of generative AI in financial document processing, suggesting that as the technology continues to advance, its applications and benefits will expand even further. Future research opportunities are identified, particularly in the areas of improving the efficiency and scalability of AI models, enhancing their ability to handle increasingly complex document types, and developing more sophisticated fraud detection algorithms. The paper concludes with a discussion on the potential long-term impact of generative AI on the financial industry, arguing that it will play a crucial role in shaping the future of financial operations by providing more accurate, scalable, and secure document processing solutions. This paper makes a significant contribution to the existing body of knowledge on the application of AI in financial services, particularly in the area of document processing and fraud detection. By providing a detailed analysis of the challenges faced by financial organizations and demonstrating how generative AI can address these challenges, the research offers valuable insights for both academic researchers and practitioners in the field. 
The findings presented in this paper have important implications for the future of document processing in financial organizations, suggesting that the adoption of generative AI will be essential for maintaining operational efficiency, accuracy, and security in an increasingly complex and fast-paced financial environment. In summary, this research not only highlights the transformative potential of generative AI in financial document processing but also provides a roadmap for its successful implementation in large financial organizations, with a particular emphasis on enhancing fraud detection capabilities.
- Research Article
- 10.3389/conf.fncom.2012.55.00143
- Jan 1, 2012
- Frontiers in Computational Neuroscience
Encoding and Recall of Natural Image Sequences with Conditionally Restricted Boltzmann Machines
- Research Article
- 10.55632/pwvas.v96i1.1063
- Apr 18, 2024
- Proceedings of the West Virginia Academy of Science
CAMERON VU, Dept of Computer Science and Math & ENGR, Shepherd University, Shepherdstown, WV, 25443, and DARIA PANOVA, Dept of Computer Science and Math & ENGR, Shepherd University, Shepherdstown, WV, 25443, and JOSIAH KOWALSKI, Dept of Computer Science and Math & ENGR, Shepherd University, Shepherdstown, WV, 25443, and Dr. W. LIAO (Faculty Advisor), Dept of Computer Science and Math & ENGR, Shepherd University, Shepherdstown, WV, 25443, and Dr. O. Guzide (Faculty Advisor), Dept of Computer Science and Math & ENGR, Shepherd University, Shepherdstown, WV, 25443. Smart Parking Space Detection with Generative Artificial Intelligence and Large Language Models. The increasing relevance of generative AI and large language models is reshaping various sectors of modern society. These advancements have spurred notable progress in fields such as healthcare, finance, and education. Yet, the application of AI extends beyond expert domains, offering simplified solutions to everyday tasks for the general populace. This project harnesses the power of generative artificial intelligence and large language models to develop a practical application: smart parking space detection. By leveraging these technologies, individuals can effortlessly ascertain the availability of parking spots in monitored lots via camera or photographic monitoring, facilitated by a straightforward algorithm. The overarching objective is twofold: to engineer a user-friendly system utilizing generative AI principles and to demonstrate the potential for such technologies to enhance the daily experiences of ordinary individuals.
- Research Article
- 10.22481/praxisedu.v21i52.17104
- Jul 9, 2025
- Práxis Educacional
Many strategies have been proposed for responding to generative AI (genAI) in higher education since the public launch of ChatGPT, but many challenges remain for teaching, learning, and assessment. This conceptual essay explores tensions arising from the introduction of genAI into higher education, focusing on implications for equity outcomes. These tensions include the need to teach foundational academic skills concurrently with critical AI literacy, assessment redesign challenges, and genAI’s impact on knowledge production. In this articulation of tensions, genAI is conceptualised as an “assemblage” of technologies, sociopolitical and pedagogical contexts, epistemological foundations, and so on. By understanding genAI in this way, this essay argues that there are fundamental aspects to how genAI functions as a technology, along with the particularities of the contexts into which it is introduced, that make it a potential threat to equity outcomes. Countering this potential threat must not be left up to individual educators but will require institutional and sector-wide leadership.
- Research Article
- 10.54691/vhyxgj39
- Jul 13, 2024
- Scientific Journal Of Humanities and Social Sciences
Personal information is the foundation of generative AI, and ChatGPT-like generative AI needs to process a large amount of personal information at various stages such as model training, model generation, and model optimization, which also has a certain impact on traditional personal information protection rules. During the information collection phase, generative AI may evade the informed consent rules and infringe on the privacy rights of information subjects. In the information utilization stage, generative AI may undermine basic personal information processing rules such as the principle of purpose limitation and the principle of openness and transparency, increasing the risk of personal information leakage. At the information generation stage, generative AI can generate false and discriminatory information. Therefore, in the context of generative AI, personal information protection faces the problems of the notification and consent rules being hollowed out, the principle of minimum necessity being voided, and the frequent leakage of personal information. Based on this, it is necessary to promote the transformation of the notification and consent rules from a "personal control center" to a "risk control center", promote a risk-based interpretation of the principle of minimum necessity, and improve the risk-based personal information protection compliance system to solve the problem of personal information protection in the context of generative AI.
- Research Article
- 10.54097/7a1sv647
- Jan 17, 2025
- Journal of Education and Educational Research
This study investigates the role of generative artificial intelligence (AIGC), particularly large language models, in enhancing the digital literacy of pre-service teachers. With the rapid growth of AI technologies, integrating generative AI into education has gained significant attention. The research focuses on how varying frequencies of generative AI usage affect pre-service teachers’ skills in information processing, problem-solving, and critical thinking. Using a polynomial regression model, we analyze the relationship between factors such as AI usage frequency, problem-solving time, feedback quality, and digital literacy scores. The results indicate that frequent use of generative AI substantially improves digital literacy, with the high-frequency group achieving higher and more consistent scores compared to the low-frequency group. Personalized feedback and project-based tasks, provided by generative AI, enhance students’ comprehension and application of digital technologies. This research shows that incorporating generative AI into teacher training programs not only supports personalized learning but also fosters essential digital competencies. The findings provide valuable insights for enhancing pre-service teachers' digital literacy and lay a foundation for future educational practices involving AI technologies.
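The study's modeling step, polynomial regression of digital-literacy scores on usage frequency, can be sketched with NumPy's `polyfit`. The data values below are invented placeholders shaped like the study's variables, not its actual data:

```python
import numpy as np

# Hypothetical data: weekly GenAI usage frequency vs. digital-literacy score.
# Values are invented for illustration only.
freq  = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
score = np.array([52, 58, 65, 70, 74, 77, 79, 80], dtype=float)

# Second-degree polynomial regression, matching the study's modeling approach.
coeffs = np.polyfit(freq, score, deg=2)   # least-squares fit, highest degree first
model = np.poly1d(coeffs)

print("predicted score at freq=5:", round(model(5.0), 1))
```

A negative leading coefficient in such a fit would indicate diminishing returns: literacy gains per additional unit of usage shrink at higher frequencies, which is the kind of nonlinearity a polynomial model can capture and a linear one cannot.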
- Research Article
- 10.30807/ksms.2024.27.2.003
- Jun 30, 2024
- Korean School Mathematics Society
This study focused on the potential of generative AI in the development of digital literacy and aimed to develop ChatGPT-based mathematics teaching and learning materials that can be used in middle and high school mathematics classes. To this end, we extracted "information processing and generation", "digital problem solving", and "digital concept formation" as components of digital literacy that can be developed in mathematics classes using generative AI; set the teaching and learning phases of "AI utilization", "AI analysis", "AI creation", and "AI critical evaluation" within an AI literacy conceptual system; and then specified a framework for developing mathematical teaching and learning materials using generative AI for cultivating digital literacy. Based on this, we developed mathematics teaching and learning materials for "digital concept formation" and "digital problem solving" that can be used in mathematics classes dealing with trigonometric ratios for acute angles, dot products of vectors, statistical problem settings, and the truth and falsity of propositions. The framework for developing materials and the teaching and learning materials specified in this study can provide meaningful implications for researchers and teachers who are interested in using generative AI as a didactical instrument in mathematics classes.
- Research Article
- 10.1109/mipr62202.2024.00080
- Aug 7, 2024
- Proceedings. IEEE Conference on Multimedia Information Processing and Retrieval
Information processing and retrieval in literature are critical for advancing scientific research and knowledge discovery. The inherent multimodality and diverse literature formats, including text, tables, and figures, present significant challenges in literature information retrieval. This paper introduces LitAI, a novel approach that employs readily available generative AI tools to enhance multimodal information retrieval from literature documents. By integrating tools such as optical character recognition (OCR) with generative AI services, LitAI facilitates the retrieval of text, tables, and figures from PDF documents. We have developed specific prompts that leverage in-context learning and prompt engineering within Generative AI to achieve precise information extraction. Our empirical evaluations, conducted on datasets from the ecological and biological sciences, demonstrate the superiority of our approach over several established baselines including Tesseract-OCR and GPT-4. The implementation of LitAI is accessible at https://github.com/ResponsibleAILab/LitAI.
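The prompt-construction idea behind such in-context extraction can be sketched as follows. The schema, example text, and helper names here are assumptions for illustration, not LitAI's actual prompts or code:

```python
# Sketch of few-shot prompt construction for structured extraction from OCR text,
# in the style LitAI describes: pair in-context examples with the new instance
# so a generative model returns structured fields. All names/schemas are invented.

FEW_SHOT = [
    ("Table 1. Mean body mass of A. sylvaticus: 23.4 g (n=40)",
     '{"species": "A. sylvaticus", "measure": "body mass", "value": "23.4 g", "n": 40}'),
]

def build_extraction_prompt(ocr_text: str) -> str:
    parts = ["Extract the measurement as JSON with keys species, measure, value, n."]
    for src, out in FEW_SHOT:                    # in-context learning examples
        parts.append(f"Text: {src}\nJSON: {out}")
    parts.append(f"Text: {ocr_text}\nJSON:")     # the new instance to extract
    return "\n\n".join(parts)

prompt = build_extraction_prompt("Table 2. Wing length of P. major: 75.1 mm (n=18)")
print(prompt)
```

The prompt ends at `JSON:` so the model's continuation is the structured record itself, which keeps the downstream parsing trivial.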
- Research Article
- 10.33889/ijmems.2025.10.3.031
- Jun 1, 2025
- International Journal of Mathematical, Engineering and Management Sciences
Globally, enterprises are undergoing significant transformation in line with developments based on the industrial revolution by leveraging extensive computing resources, data capture technologies, information processing systems, and advanced data science models that span analytics, optimization, and algorithmic intelligence. The global market's increasing demand, diverse resource supply options, global competition, and environmental protection needs are driving organizations to adopt sustainable strategies. These involve utilizing information technology and analytics more effectively, innovating manufacturing and service support systems, and employing novel problem-solving methods. The awareness of these challenges and opportunities inspired the theme of the joint event: 56th Annual Convention of ORSI (2023-ORSI) and the 10th International Conference on Business Analytics and Intelligence (2023-ICBAI), held at the Indian Institute of Science, Bangalore, India, from December 18 to 20, 2023. The Operational Research Society of India (ORSI) Karnataka, the Department of Management Studies, IISc Bangalore, and the Analytics Society of India (ASI), DCAL, IIM Bangalore jointly organized this event. The joint event aimed to establish a premier platform for knowledge sharing among distinguished practitioners, academics, and researchers from industry and academia, focusing on the current applications of Operations Research (OR), Business Analytics (BA), and Business Intelligence (BI). The conference received over 655 paper submissions, with 455 selected for presentation. 66 papers were deemed particularly interesting, and authors of 13 promising articles were invited to submit extended versions for a special issue of the International Journal of Mathematical, Engineering and Management Sciences (IJMEMS).
After rigorous peer review, eight papers were accepted for publication in this special issue, addressing common challenges in Operations Research, Business Analytics, and Business Intelligence. This Special Issue of the IJMEMS explores recent developments in Operations Research, Business Analytics, and Business Intelligence. It presents cutting-edge trends and substantial contributions to key areas such as scheduling problems, transshipment problems, e-commerce, nanofluids, blockchain technology, generative AI, augmented analytics, machine learning, and real-time anomaly detection. This Special Issue delves into the following eight topics:
- Unmasking Content Clarity: Advancements in Defining, Measuring and Enhancing Readability: The authors present a novel method using natural language processing and generative AI to quantitatively evaluate readability and comprehension. This approach surpasses traditional readability indices, offering substantial benefits for content creation and knowledge management in fields like education, business, technical support, and policy platforms.
- Strategic Insights into Blockchain Adoption in Automotive Supply Chains: A Comparative AHP-TOPSIS and TISM-MICMAC Analysis: The authors explore blockchain adoption in the automotive industry using a multidisciplinary approach involving AHP, TOPSIS, TISM, and MICMAC analyses. This study identifies key enablers and their relationships, offering actionable insights and practical recommendations for automotive managers considering blockchain adoption.
- Avoid Maximum Cost Method for Solving Linear Fractional Transshipment Problem: The authors introduce a mathematical model for the linear fractional transshipment problem (LFTP) and suggest the "Avoid Maximum Cost Method" to obtain an initial basic feasible solution for the LFTP. This study conducts a comparative analysis with existing methods to demonstrate the efficiency of the proposed approach.
- Mathematical Study of Dispersion of Nano Biosensors in an Artery with Multiple Stenosis: This work examines nano-biosensors in a diseased artery with multiple stenoses, determining the temperature, velocity of nanofluid, and transport coefficients. The results lay the groundwork for developing nano-biosensors to diagnose, treat, and manage cardiovascular disease. The mathematical model has possible scope for target detection and drug delivery at stenosed sites.
- Integrating Generative AI in Business Intelligence: A Practical Framework for Enhancing Augmented Analytics: This study offers a practical framework for integrating generative AI (GenAI) into Business Intelligence (BI). By adopting it, businesses can maximize the potential of GenAI and BI, enhancing analytics and operations and fostering a collaborative, data-driven culture.
- Data Monetization Through Cross Industry Collaboration in Retail Banking: This paper examines how data sharing between banks and e-commerce platforms, facilitated by data monetization, can improve banking customer experiences. This study proposes a framework using propensity models to identify promising customers and offer personalized products and promotions.
- Development of Dispatching Rule based Heuristic Algorithms for Real-Time Dynamic Scheduling of Non-identical Parallel Burn-in Ovens with Machine Eligibility Restriction: This study tackles a realistic problem in semiconductor manufacturing by scheduling non-identical parallel burn-in ovens. The study proposes 25 heuristic algorithms for real-time dynamic scheduling with machine eligibility restrictions. Through empirical and statistical analysis, this study identifies top-performing algorithms.
A Hybrid Framework for Real-Time Data Drift and Anomaly Identification Using Hierarchical Temporal Memory and Statistical Tests: This paper introduces a hybrid framework combining Hierarchical Temporal Memory (HTM) and the Sequential Probability Ratio Test (SPRT) for real-time data drift detection and anomaly identification. In experiments, the framework minimized retraining and false positives, outperforming traditional methods.

Challenges and Future Directions

1. Enhanced Decision-Making: Addressing uncertainty and complexity in decision-making processes to improve outcomes.
2. Regulatory Frameworks: Conducting further research to establish flexible yet robust regulatory structures that can effectively adapt to the rapid evolution of blockchain technology.
3. Optimization Algorithms: Developing efficient algorithms to solve the linear fractional transshipment problem with multi-objective linear fractional functions.
4. AI in Business Intelligence: Investigating the long-term impacts of AI-enabled Business Intelligence tools on data-driven decision-making and organizational performance.
5. Ethical AI Considerations: Addressing ethical concerns such as data privacy, fairness, and biases in AI systems to ensure responsible use.
6. Digital Collaboration Frameworks: Developing frameworks that integrate digital footprints and cross-industry collaboration data to enhance strategic partnerships.
7. Advanced Scheduling Algorithms: Creating advanced meta-heuristic algorithms using efficient dispatching rule-based heuristics for dynamic scheduling of non-identical parallel batch processing machines with eligibility constraints.
8. Hybrid Anomaly Detection: Proposing a hybrid framework that combines a multivariate extension of Hierarchical Temporal Memory with a multivariate Sequential Probability Ratio Test for enhanced anomaly detection.
9. Advanced Text Generation: Utilizing advanced text generation techniques like prompt engineering and fine-tuning to produce more readable and engaging content.
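The Sequential Probability Ratio Test used in the hybrid drift-detection framework can be sketched in a few lines. This Gaussian mean-shift variant with Wald's threshold approximations is a generic textbook form, not the paper's multivariate implementation; all parameter values are illustrative.

```python
import math

def sprt_gaussian_mean(xs, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Sequential Probability Ratio Test for a shift in the mean of Gaussian
    data: H0 (mean = mu0) vs H1 (mean = mu1). Observations are consumed one
    at a time; the test stops as soon as the accumulated log-likelihood
    ratio crosses one of Wald's approximate thresholds.
    Returns ('H0' | 'H1' | 'continue', number_of_samples_used)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for i, x in enumerate(xs, 1):
        # log-likelihood ratio increment of one Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", i
        if llr <= lower:
            return "H0", i
    return "continue", len(xs)
```

For example, a stream of observations sitting exactly at `mu1` accumulates 0.5 per sample and triggers "H1" after 10 samples at the default error rates.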
We extend our sincere appreciation to all contributing authors for their significant contributions and anonymous reviewers for their dedication and sincere evaluation of submissions. Their timely and excellent responses have been truly gratifying. Additionally, we would like to express our heartfelt thanks to Professor Mangey Ram, Editor-in-Chief of the International Journal of Mathematical, Engineering and Management Sciences, for his support in accepting this special issue and providing unwavering backing from its inception. Guest Editors
- Peer Review Report
- 10.7554/elife.80667.sa2
- Jan 19, 2023
Abstract

Altruism is critical for cooperation and productivity in human societies but is known to vary strongly across contexts and individuals. The origin of these differences is largely unknown, but may in principle reflect variations in different neurocognitive processes that temporally unfold during altruistic decision making (ranging from initial perceptual processing via value computations to final integrative choice mechanisms). Here, we elucidate the neural origins of individual and contextual differences in altruism by examining altruistic choices in different inequality contexts with computational modeling and electroencephalography (EEG). Our results show that across all contexts and individuals, wealth distribution choices recruit a similar late decision process evident in model-predicted evidence accumulation signals over parietal regions. Contextual and individual differences in behavior related instead to initial processing of stimulus-locked inequality-related value information in centroparietal and centrofrontal sensors, as well as to gamma-band synchronization of these value-related signals with parietal response-locked evidence-accumulation signals. Our findings suggest separable biological bases for individual and contextual differences in altruism that relate to differences in the initial processing of choice-relevant information.

Editor's evaluation

In this important paper, the authors use a sophisticated combination of computational modeling and EEG to show that variation in generosity produced by changes in context (i.e., disadvantageous vs.
advantageous inequality) and variation due to individual differences in concern for others both seem to occur early, during the perceptual or valuation stage of a choice, rather than later on during choice comparison. However, these two sources of variation also appear to operate through distinct mechanisms during this stage of processing, which spurs further questions about the drivers of human prosocial behavior. This paper will be of considerable interest to researchers studying the psychological and neural basis of variation in prosocial behavior. https://doi.org/10.7554/eLife.80667.sa0

Introduction

Altruism – incurring own costs to benefit others – is fundamental for cooperation and productivity in human societies (de Waal, 2008; Piliavin and Charng, 1990). It not only plays crucial roles in shaping socio-political ideology and welfare (e.g. via tax policies and charity; Bechtel et al., 2018; Offer and Pinker, 2017) but is also essential for collective management of challenging situations, such as political, financial, and public health crises. While altruism is thought to be a stable behavioral tendency shaped by the evolutionary advantages of the ability to cooperate, it is unclear why this tendency varies so strongly across individuals, contexts, and cultures (Bester and Güth, 1998; Hamilton, 1964a; Hamilton, 1964b; Lebow, 2018; Piliavin and Charng, 1990). Is altruism governed by a set of unitary neuro-cognitive mechanisms that are engaged to varying degrees in different situations or different people (Tricomi et al., 2010)? Or are there fundamentally different types of altruistic actions that are guided by different neuro-cognitive processes triggered by different contexts (Hein et al., 2016)? From a neurobiological perspective, both these possibilities appear plausible.
On the one hand, all altruistic actions necessitate the ability to override self-interest, a parsimonious brain mechanism (Bester and Güth, 1998) that is thought to be facilitated more or less by different contexts and that could be expressed to different degrees in different people (Morishima et al., 2012; Trivers, 1971). On the other hand, empirical observations suggest that altruism varies with a range of factors such as others' previous actions (e.g. empathy-based vs. reciprocity-based altruism) or their perceived similarity (e.g. social distance; Hein et al., 2016; Vekaria et al., 2017). It is thus often argued that in different contexts or different individuals, superficially similar altruistic actions can be guided by distinct motives (such as personal moral norms, responsibility, or empathy), which may be controlled by fundamentally different types of neurocognitive mechanisms (Hein et al., 2016; Piliavin and Charng, 1990; Zaki and Mitchell, 2011). One specific context factor that is often discussed in this context is the inequality in resources held by the actor and the recipient of a possible distribution: People are more willing to share if they possess more than the recipient (advantageous inequality, ADV) than if they possess less (disadvantageous inequality; DIS) (Charness and Rabin, 2002; Fehr and Schmidt, 1999; Gao et al., 2018; Güroğlu et al., 2014; Morishima et al., 2012; Tricomi et al., 2010). Although this consistent effect has been formalized with the same utility model across contexts, this model needs to comprise two distinct latent parameters quantifying altruism in the two contexts (i.e. decision weights on others' payoffs that are specific for ADV and DIS), and these are often uncorrelated and differ strongly from each other (Gao et al., 2018; Morishima et al., 2012). These observations, together with distinct psychological accounts for the distribution behaviors in different contexts (i.e. 
'guilt' in the advantageous and 'envy' in the disadvantageous inequality context), imply that altruistic choices in the two contexts may be driven by fundamentally different psychological processes (Fehr and Schmidt, 1999; Gao et al., 2018). Moreover, modeling studies often reveal that these altruism parameters vary strongly between different people for the same choice set (Fehr and Schmidt, 1999), and neuroimaging studies have shown that while distributional behavior in both contexts correlates with activity in brain regions commonly associated with motivation (e.g. the putamen and orbitofrontal cortex), either context also leads to activity in a set of distinct areas (the dorsolateral and dorsomedial prefrontal cortex in advantageous and the amygdala and anterior cingulate cortex in disadvantageous inequality; Gao et al., 2018; Yu et al., 2014). Finally, neuroanatomical research shows that only for advantageous inequality, individual variations in altruistic preferences relate to gray matter volume in the temporoparietal junction (TPJ; Morishima et al., 2012). While these behavioral modeling and neural findings suggest clear contextual and individual differences in altruism, it is still unclear what specific neurocognitive mechanisms these differences could arise from. Previous research on individual and contextual differences in altruism has largely used unitary computational models focusing exclusively on valuation (rather than attempting to separate distinct aspects of the choice process), and has used functional magnetic resonance imaging (fMRI) to identify spatial patterns of neural activity that correlate with valuation processes during wealth distribution behaviors in different contexts (Charness and Rabin, 2002; Fehr and Schmidt, 1999; Gao et al., 2018; Güroğlu et al., 2014; Morishima et al., 2012; Tricomi et al., 2010). 
For example, recent studies combined computational modeling with fMRI techniques to show that the value of altruistic choice can be modeled as the weighted sum of self- and other-interest, and that different attributes are integrated into an overall value signal correlating with BOLD activity in the ventromedial prefrontal cortex (vmPFC) (Crockett et al., 2017; Crockett et al., 2013; Hutcherson et al., 2015; Hare et al., 2010). However, since these studies neither formally examined the difference in altruistic choices between advantageous and disadvantageous inequality contexts, nor focused on separating different aspects of the decision mechanisms of altruistic choice, they can hardly address the question of whether and how different mechanisms are involved in different types of altruistic actions in different contexts (Crockett et al., 2013; Crockett et al., 2008; Gao et al., 2018). To systematically investigate this issue, it would be beneficial to harness the fact that altruistic decisions – like all choices – are guided by processes unfolding at different temporal stages (Seo and Lee, 2012; Shin et al., 2021; Tump et al., 2020). These processes include (1) initial perception of the objective information related to wealth distribution (e.g. payoff numbers) (Nieder, 2016; Pinel et al., 2004), (2) biased representations of the subjectively decision-relevant information attributes, such as attention-guided weighing of self- vs other-payoffs (Chen and Krajbich, 2018; Teoh et al., 2020), (3) integration of all these attributes and subjective preferences into decision values (Collins and Frank, 2018; Harris et al., 2018; Hutcherson et al., 2015), and (4) final decision processes that transform the decision values into motor responses (O'Connell et al., 2012; Polanía et al., 2014). 
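The last two stages in the list above (value integration and choice) are typically formalized as sequential sampling. A minimal drift-diffusion sketch follows, in which a weighted sum of the payoff changes drives noisy evidence accumulation to a bound; the weight θ on the other's payoff and all other parameter values are illustrative, not the paper's fitted estimates.

```python
import numpy as np

def simulate_ddm(delta_self, delta_other, theta=0.3, drift_scale=1.0,
                 threshold=1.0, noise_sd=1.0, dt=0.001, max_t=3.0, seed=None):
    """Simulate one choice between rejecting (lower bound) and accepting
    (upper bound) the second allocation. Evidence drifts at a rate
    proportional to the subjective value difference
        v = delta_self + theta * delta_other,
    where theta weights the other's payoff change. Returns
    (choice, response_time); choice == 1 means the second option was taken.
    All parameters are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    v = drift_scale * (delta_self + theta * delta_other)
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        # Euler step of the diffusion: drift plus scaled Gaussian noise
        x += v * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else 0), t
```

With a strongly positive value difference, the simulated decision maker accepts the second option on the vast majority of trials, and larger drifts also yield shorter simulated response times, mirroring the qualitative RT predictions of the SSM framework.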
Taking into account this temporal unfolding of the neurocognitive processes further refines the questions about the origins of differences in altruistic behavior: Do altruistic choices involve different sets of computations throughout all the temporally different processing stages (i.e., initial perceptual processing, valuation, final integrative choice mechanisms) in these different contexts and by different individuals (as suggested by Gao et al., 2018; Tricomi et al., 2010)? Or do individuals mainly perceive and attend to the choice-relevant information differently, before passing on this information to valuation and integrative decision mechanisms devoted to all types of altruistic choices (as suggested by Yu et al., 2014)? Answering these questions by means of modelling and neural recording techniques that allow a detailed focus on different temporal stages of altruistic choice processes could help us understand the biological origins of altruism, reveal why people differ strongly in altruistic behavior, and develop more efficient strategies to facilitate altruism. In the current study, we take such an approach. We combined a modified dictator game that independently varies payoffs to a player versus another person, and thereby also the inequality between both players, with electroencephalography (EEG) and sequential sampling modeling (SSM). This allowed us to identify electrophysiological markers of the initial perceptual processing and biased representation of the decision-relevant information (i.e. stimulus-locked event-related potentials [ERPs] related to the payoffs and the inequality context) as well as of the processes integrating this information into a decision variable used to guide choice (i.e. response-locked evidence accumulation [EA] signals; Balsdon et al., 2021; Hutcherson et al., 2015; Krajbich et al., 2015; Nassar et al., 2019). 
Thus, our approach differs from that of fMRI studies identifying brain areas involved in the valuation of own and others' payoffs (Fehr and Schmidt, 1999; Morishima et al., 2012; Sáez et al., 2015), since the temporal resolution of fMRI measures makes it difficult to separate response-locked decision-making processes from stimulus-locked perceptual processes and to examine the independent dynamics of these processes during distribution decisions. Our approach is also motivated by studies of nonsocial decisions showing that SSMs may provide a useful framework for investigating the temporal dynamics of the processes that integrate different choice attributes into the decision outcome (Harris et al., 2018; Maier et al., 2020). Many studies have shown that SSMs can identify these processes not just computationally, but also at the neural level, for both the perceptual (Brunton et al., 2013; Kelly and O'Connell, 2013; Ossmy et al., 2013) and value-based decision making (Glaze et al., 2015; Hutcherson et al., 2015; Pisauro et al., 2017; Polanía et al., 2014). The SSM framework provides a formal way to predict the temporal dynamics of processes that integrate evidence for one choice option over another for the temporal period leading up until choice, and to separate these from initial perceptual processes time-locked to stimulus presentation. Neural signals corresponding to these predicted evidence-accumulation signals have been identified with EEG for perceptual decision making across different sensory modalities or stimulus features (Kelly and O'Connell, 2013; O'Connell et al., 2012; Wyart et al., 2012) as well as for value-based decision making (Pisauro et al., 2017; Polanía et al., 2014). 
These studies have identified evidence accumulation processes either as the model-free build-up rate of the centroparietal positivity (CPP) (Kelly and O'Connell, 2013; Loughnane et al., 2018; Loughnane et al., 2016; O'Connell et al., 2012) or in SSM-prediction-based neural signals measured over parietal and/or frontal regions (Pisauro et al., 2017; Polanía et al., 2014). Both types of neural signals are commonly interpreted as reflecting integration of the choice-relevant evidence to reach a decision, rather than basic motor planning, which is usually identified by a fundamentally different neural signal, the contralateral action readiness potential (Kornhuber and Deecke, 2016; Schurger et al., 2021). The cortical origins of these signals may in principle correspond to locations identified by fMRI studies of corresponding SSM-predicted evidence accumulation traces, but note that these studies were not able to study the temporal dynamics of such signals and to unambiguously separate them into stimulus-locked perceptual versus response-locked decision processes (Gluth et al., 2012; Hare et al., 2011; Hutcherson et al., 2015; Rodriguez et al., 2015). Studies using this approach to investigate different types of decisions have identified different cortical areas that implement evidence-accumulation signals in different choice contexts (e.g. parietal regions specifically for perceptual decision making vs. both frontal and parietal regions for value-based decision making; Polanía et al., 2014). This shows that different types of decisions may, even if they are reported via the same manual actions, draw on evidence accumulation computations that are instantiated in distinct brain regions. Moreover, altruistic decisions driven by different motives, or made by individuals with different social preferences, have also been found to involve activity in different neural networks (Hein et al., 2016).
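The model-free CPP build-up rate mentioned above is commonly estimated as the slope of a straight line fitted to the response-locked ERP in a pre-response window. A minimal sketch, with an illustrative window (the windows used in the cited studies may differ):

```python
import numpy as np

def cpp_buildup_rate(erp, fs, window=(-0.25, -0.05)):
    """Model-free build-up rate of a response-locked ERP such as the CPP:
    slope of a first-order polynomial fit over a pre-response window.
    `erp` is a 1-D response-locked average whose last sample immediately
    precedes the response (time 0); `fs` is the sampling rate in Hz;
    `window` gives the fit interval in seconds relative to the response."""
    n = erp.shape[0]
    t = np.arange(-n, 0) / fs                     # time axis ending at response
    mask = (t >= window[0]) & (t <= window[1])    # samples inside the window
    slope, _ = np.polyfit(t[mask], erp[mask], 1)  # linear fit; slope in units/s
    return slope
```

Applied to a perfectly linear 5 µV/s ramp, the function recovers a rate of 5.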
Therefore, it is necessary to differentiate whether the contextual and individual differences in altruistic decisions reflect recruitment of different brain areas/signals and/or of different computations that are performed within these brain areas. If different final decision mechanisms (i.e. computational and/or neural mechanisms) were to be involved in the two types of altruistic choices, or in different individuals, we should observe response-locked evidence-accumulation signals in different brain areas (e.g. frontal vs. parietal regions), or even different types of computations, in the two types of inequality contexts and/or different individuals. Conversely, if the same final decision mechanism is employed for both types of choice contexts, we should observe similar evidence-accumulation neural signals in similar brain areas, but systematic variations across contexts and/or individuals in those signals (e.g. responses in different brain areas and/or with different temporal characteristics) related to early perceptual/attentional processing of choice-relevant information, such as the available payoff magnitudes (Harris et al., 2018). Here, we apply this approach and use SSMs fitted to individuals' wealth distribution behaviors to predict the underlying neural evidence accumulation dynamics. We then employ these predicted EA signals in our EEG analyses to examine whether a similar neural choice system accumulates the choice-relevant evidence in both inequality contexts, or whether distinct neural systems implement this decision process for the different contexts. Then, we examine whether the different features of each choice problem that ultimately need to be integrated into the choice-relevant evidence – that is, the specific payoffs available to oneself and the other person – are initially processed in a different manner for different contexts and in different individuals. 
This allows us to directly approach the question of whether contextual and individual differences in altruism arise from differences in the decision mechanisms that integrate and compare choice-relevant information at the final stage of the choice process, or rather from differences in the initial processing and biased representation of the choice-relevant information that is ultimately integrated into the final decision mechanism.

Results

We recorded 128-channel EEG data from healthy participants playing a modified Dictator Game (DG). On each trial of this task, participants played as proposers and chose between two possible allocations of monetary tokens between themselves and an unknown partner. We systematically varied the allocation options from trial to trial so that in half of the trials, participants received less than their partners for both choice options (disadvantageous context [DIS]) and in the other half they got more than their partners for both options (advantageous context [ADV]). These two types of trials were randomly intermixed and were only defined by the size of the payoffs presented on the screen. On each trial, we presented the two options sequentially, to allow clear identification of time points at which the information associated with each option was processed (Figure 1A, see Materials and methods for details). This sequential presentation allowed us to establish the inequality context with the presentation of the first option, without having to explicitly instruct participants about the two contexts. We then studied individuals' sensitivity to self-payoff and other-payoff by focusing on how the choice of the second option depended on the change in these variables from the first to the second option.
Importantly, as shown in the payoff schedule of all trials (Figure 1—figure supplement 1), we matched self-/other-payoff differences and the resulting absolute levels of inequality across both contexts and also across the second and the first options (Figure 1—figure supplement 1 middle and right panels). This allowed us to compare choices and response times, model-defined neural choice processes time-locked to the response, and neural processing of different stimulus information (self- and other-payoff) between the two contexts.

Figure 1 with 2 supplements. Experimental design and behavioral results. We employed a modified dictator game to measure individuals' wealth distribution behaviors. (A) Example of display in a single trial. In the task, participants played as proposers to allocate a certain amount of monetary tokens between themselves and anonymous partners. At the beginning of each trial, participants were presented with one reference option in blue and were asked to keep their eyes on the central cross for at least 1 s to start the trial, as indicated by the change in font color from blue to green. When the second option was presented, participants had to choose between the two options within 3 s. The selected option was highlighted in blue before the inter-trial interval. Font color assignment to phases (i.e. blue and green to response) was counterbalanced across participants. (B) Payoff information and context affect choice systematically. The generalized linear mixed-effects model shows the effects of multiple predictors on the probability to choose the second option; (C) Payoff information and context affect response times systematically. The linear mixed-effects model shows the effects of multiple predictors on response times (RTs). ΔS, Self-payoff Change; ΔO, Other-payoff Change; CON, Context; C, Constant; •••, p < 0.001; ••, p < 0.01; •, p < 0.05.
Error bars indicate 95% confidence interval (CI) of the estimates, N=38. Based on the model fits and their predicted response-locked evidence accumulation EEG traces, we first tested whether similar or different neural processes (i.e. brain regions or physiological markers) underlie the ultimate choice process in the two inequality contexts, similar to how this has been studied for other types of decisions (Polanía et al., 2014). Then, we clarified whether neural processing of the stimulus information – which subsequently feeds into the decision processes – differs across contexts and individuals. For this analysis, we examined stimulus-locked event-related potentials (ERPs), in a way that has also been used to differentiate neural processing of decision-relevant features in non-social value-based decision making (e.g. perceptions of health and taste of food items) (Harris et al., 2018). Finally, we explored how individual differences in altruism are related to large-scale information communication between regions associated with these two sets of processes (i.e. response-locked decision processes and stimulus-locked perceptual processes), by examining inter-regional synchronization in the gamma-band frequency (30–90 Hz). This last analysis was motivated by the consideration that evidence accumulation processes need to integrate evidence input from different neural sources (e.g. perceptual processes) (Polanía et al., 2014), and by the proposal that coherent phase-coupling in the gamma band between different groups of neurons may serve as a fundamental process of neural communication for information transmission (Bosman et al., 2014; Fries, 2009; Fries, 2005; Vinck et al., 2013), as already shown for non-social value-based decisions (Polanía et al., 2014; Siegel et al., 2008).
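Gamma-band synchronization between two sensors can be quantified, for example, with the phase-locking value (PLV): band-pass filter, extract instantaneous phase via the Hilbert transform, then average the unit phase-difference vectors. This is a minimal generic sketch; the paper's actual pipeline (e.g. filtering details, epoching, or source-level analysis) may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_plv(x, y, fs, band=(30.0, 90.0)):
    """Phase-locking value between two 1-D signals in a frequency band.
    PLV is 1 for a perfectly constant phase relation and approaches 0
    for unrelated phases. `fs` is the sampling rate in Hz."""
    # 4th-order Butterworth band-pass, applied forward-backward
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x)))
    phy = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phx - phy))))
```

Two 40 Hz sinusoids with a fixed phase lag give a PLV near 1, while a sinusoid paired with independent noise gives a PLV near 0.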
Behavior: Altruism depends differentially on self- and other-payoffs across contexts

Before performing model-based analyses, we ran model-free linear mixed-effects regressions to establish that the choice-relevant information (i.e. self-payoff, other-payoff, and inequality context [ADV and DIS]) indeed systematically affects individual wealth distribution choices. These analyses confirmed that both self-payoff and other-payoff were important factors underlying individuals' choices. Specifically, participants chose the second option more often when either they or the receiver profited more from this choice (main effect Self-payoff Change (ΔS): beta = 3.77, 95% CI [3.65–3.89], p < 0.001; main effect Other-payoff Change (ΔO): beta = 0.56, 95% CI [0.51–0.61], p < 0.001; ΔS (ΔO): participants' own (partners') payoff change between the second and the first option) (Supplementary file 1, Figure 1B). However, participants were less influenced by changes in their own payoff when they had more money than the other (interaction of Self-payoff Change and Context) or when the receiver gained more from the choice (interaction of Self-payoff Change and Other-payoff Change); the latter effect was stronger when the participants had more money than the receiver (three-way interaction of Self-payoff Change, Other-payoff Change, and Context, p < 0.001; Supplementary file 1, Figure 1B). For robustness checks of these effects, see Appendix 1 and the supplements to Figure 1; for confirmation by model-based analyses, see Appendix 1. Note that we also fitted models without the interaction effects and/or main effects, but model comparison favored the full model (Supplementary file 1). An additional linear mixed-effects model suggested that the presentation order (i.e. first or second) of the options did not affect individuals' choices (Appendix 1 and Supplementary file 1). Self-payoff, other-payoff, and context also affected how quickly participants made their decisions.
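As a simplified illustration of such a choice regression, the following sketch fits a pooled logistic regression by Newton-Raphson in plain NumPy. It omits the random-effects structure of the true mixed model, and the simulated coefficients merely mirror the reported main effects (3.77 for ΔS, 0.56 for ΔO); the data are entirely synthetic.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS).
    X: design matrix (first column = intercept); y: 0/1 outcomes."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted probabilities
        W = p * (1.0 - p)                          # IRLS weights
        grad = X.T @ (y - p)                       # score vector
        hess = (X * W[:, None]).T @ X              # observed information
        beta += np.linalg.solve(hess, grad)        # Newton update
    return beta

# Synthetic data: choice of the second option driven by payoff changes.
rng = np.random.default_rng(1)
n = 5000
dS = rng.normal(size=n)                            # self-payoff change
dO = rng.normal(size=n)                            # other-payoff change
p_choose = 1.0 / (1.0 + np.exp(-(3.77 * dS + 0.56 * dO)))
choice = (rng.random(n) < p_choose).astype(float)
X = np.column_stack([np.ones(n), dS, dO])
beta_hat = fit_logistic(X, choice)
```

With 5000 synthetic trials, the fitted coefficients land close to the generating values, illustrating how effects of this size can be recovered from binary choice data.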
Response times were shorter for larger absolute values of self-payoff change (main effect of Self-payoff Change) and of other-payoff change (main effect of Other-payoff Change) (Figure 1C). Moreover, both these effects differed between the two inequality contexts, with response times modulated more strongly in the disadvantageous inequality context (interaction between Self-payoff Change and Context, p < 0.001; interaction between Other-payoff Change and Context; Supplementary file 1, Figure 1C). These effects are consistent with the central assumption of the SSM framework that stronger evidence will speed up evidence accumulation and the resulting choice, thereby already suggesting that an accumulation-based decision process may integrate self- and other-payoff to guide individual decisions (for details of these analyses, see the supplements to Figure 1).

EEG: Similar parietal evidence accumulation across contexts

To address the question of whether distribution choices are driven by similar or different neural decision processes across both inequality contexts, we fitted a sequential sampling model to participants' behavioral data and used it to predict neural evidence accumulation signals for the two contexts. Our analyses revealed EA signals over similar parietal regions for both contexts, and no EA signals that would indicate the use of fundamentally different final choice mechanisms in the different contexts.
Specifically, we first fitted the SSM by classifying trials as generous or selfish choices, depending on whether the selected option contained the more or the less generous distribution of monetary tokens between both players. For each trial, the model used the subjective value difference between the more generous and the more selfish option (computed using the utility model; see Materials and methods) as input to predict evidence accumulation signals up until the moment when the decision was made. For this, we used a choice model that describes the decision as a noisy evidence accumulation process, with a utility function combining each participant's own payoff and the partner's payoff of each option via a context-specific decision weight on the other's payoff (c = DIS for the disadvantageous and c = ADV for the advantageous inequality context), indexed over participants s and trials t. This model allowed us to estimate parameters that correspond to different aspects of valuation and the choice process (i.e. the decision weights on others' payoffs and the evidence accumulation rate), as well as parameters which are less likely to be related to the cognitive or neural mechanisms underlying valuation or decision making (e.g. decision threshold and non-decision time; see Materials and methods for a detailed model description). With these parameters, we could examine the effects of context on both basic altruistic preferences (i.e. the decision weights) and on the final decision process that accumulates the subjective values passed on from perception and valuation processes. Although the payoff information of each option was fully visible on each trial, participants still had to accumulate evidence by inspecting and comparing the difference in payoffs between the options, so the decision time may have reflected the period during which the decision process accumulates the evidence (see Materials and methods for the full set of models and the details of the model we used for our analyses). To characterize the evidence accumulation process, we generated EA traces predicted by the fitted model for the context and the payoffs presented on each trial. We reasoned that these traces were valid proxies of the EA processes underlying choice, since the fitted model could reproduce both choices and response times across the two contexts. For both types of choices and contexts (ADV and DIS), the model captured the observed data well (see Materials and methods). The model also reproduced the response time patterns, including that choices were overall faster during advantageous inequality (ADV vs. DIS).
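The context-specific utility input described above can be sketched as follows. The weights are illustrative placeholders rather than the paper's per-participant estimates, and the paper's exact functional form may differ; the sketch only captures the idea that the other's payoff change is weighted differently in the ADV and DIS contexts.

```python
# Illustrative context-specific altruism weights (placeholders): a larger
# weight under advantageous inequality reflects the typical finding that
# people care more about the other's payoff when they are ahead.
THETA = {"ADV": 0.4, "DIS": 0.1}

def value_difference(s2, o2, s1, o1, context):
    """Subjective value difference between the second and first option:
    own-payoff change plus a context-weighted other-payoff change.
    s1/s2: own payoffs; o1/o2: partner payoffs; context: 'ADV' or 'DIS'."""
    theta = THETA[context]
    return (s2 - s1) + theta * (o2 - o1)
```

For the same pair of options, the resulting value difference (and hence the predicted drift toward the generous choice) is larger in the ADV context than in the DIS context.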