
Problem Solving in Artificial Intelligence

A reflex agent in AI maps states directly to actions. When that mapping becomes too large to store or to compute, the agent can no longer operate in the environment, so the task is handed to a problem-solving agent, which breaks the large problem into smaller sub-problems and solves them one by one. The integrated sequence of actions then produces the desired outcome.

Depending on the problem and its working domain, different types of problem-solving agents are defined. They operate at an atomic level, with no internal state visible to the problem-solving algorithm, and work by precisely defining the problem and its candidate solutions. Problem solving is therefore the part of artificial intelligence that encompasses techniques such as trees, B-trees, and heuristic algorithms for solving a problem.

We can also say that a problem-solving agent is a goal-driven (result-driven) agent: it always focuses on satisfying its goals.

There are basically three types of problem in artificial intelligence:

1. Ignorable: In which solution steps can be ignored.

2. Recoverable: In which solution steps can be undone.

3. Irrecoverable: In which solution steps cannot be undone.

Steps of problem solving in AI: The problems AI deals with are closely tied to human nature and human activities, so we need a finite sequence of well-defined steps that makes solving them manageable.

The following steps are required to solve a problem:

  • Problem definition: Detailed specification of the inputs and of what counts as an acceptable solution.
  • Problem analysis: Analyse the problem thoroughly.
  • Knowledge representation: Collect detailed information about the problem and define all applicable techniques.
  • Problem solving: Select the best technique.

Components used to formulate the associated search problem (a minimal code sketch follows the list):

  • Initial state: The state the agent starts from; it sets the AI agent on its way towards the specified goal.
  • Actions: A description of the possible actions available to the agent in a given state.
  • Transition model: A description of what each action does, i.e. the state that results from performing an action in a given state.
  • Goal test: Determines whether a given state is a goal state. Once the goal is reached, the search stops and the cost of achieving the goal is evaluated.
  • Path cost: Assigns a numeric cost to each path; it typically reflects hardware, software, and human effort, and the solution with the lowest path cost is preferred.
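As a concrete illustration of these five components, here is a minimal Python sketch of a toy route-finding problem. The `RouteProblem` class and the small road map are hypothetical examples made up for this sketch, not part of any particular AI library.

```python
# Minimal sketch of the five problem-formulation components as a Python class.
# The toy road map and the RouteProblem name are illustrative assumptions.

ROADS = {  # assumed toy map: city -> {neighbour: distance}
    "A": {"B": 75, "C": 140},
    "B": {"A": 75, "D": 71},
    "C": {"A": 140, "D": 99},
    "D": {"B": 71, "C": 99},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial          # initial state
        self.goal = goal                # used by the goal test

    def actions(self, state):
        """Actions available in a state: drive to any neighbouring city."""
        return list(ROADS[state].keys())

    def result(self, state, action):
        """Transition model: the state reached by taking `action`."""
        return action

    def goal_test(self, state):
        """Goal test: have we reached the target city?"""
        return state == self.goal

    def path_cost(self, cost_so_far, state, action, next_state):
        """Path cost: accumulated driving distance."""
        return cost_so_far + ROADS[state][next_state]

problem = RouteProblem("A", "D")
print(problem.actions("A"))       # ['B', 'C']
print(problem.goal_test("D"))     # True
```

A search algorithm such as breadth-first or uniform-cost search would then operate only through this interface, without knowing anything about roads or cities.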


AI accelerates problem-solving in complex scenarios


While Santa Claus may have a magical sleigh and nine plucky reindeer to help him deliver presents, for companies like FedEx, the optimization problem of efficiently routing holiday packages is so complicated that they often employ specialized software to find a solution.

This software, called a mixed-integer linear programming (MILP) solver, splits a massive optimization problem into smaller pieces and uses generic algorithms to try and find the best solution. However, the solver could take hours — or even days — to arrive at a solution.

The process is so onerous that a company often must stop the software partway through, accepting a solution that is not ideal but the best that could be generated in a set amount of time.

Researchers from MIT and ETH Zurich used machine learning to speed things up.

They identified a key intermediate step in MILP solvers that has so many potential solutions it takes an enormous amount of time to unravel, which slows the entire process. The researchers employed a filtering technique to simplify this step, then used machine learning to find the optimal solution for a specific type of problem.

Their data-driven approach enables a company to use its own data to tailor a general-purpose MILP solver to the problem at hand.

This new technique sped up MILP solvers between 30 and 70 percent, without any drop in accuracy. One could use this method to obtain an optimal solution more quickly or, for especially complex problems, a better solution in a tractable amount of time.

This approach could be used wherever MILP solvers are employed, such as by ride-hailing services, electric grid operators, vaccination distributors, or any entity faced with a thorny resource-allocation problem.

“Sometimes, in a field like optimization, it is very common for folks to think of solutions as either purely machine learning or purely classical. I am a firm believer that we want to get the best of both worlds, and this is a really strong instantiation of that hybrid approach,” says senior author Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE), and a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).

Wu wrote the paper with co-lead authors Sirui Li, an IDSS graduate student, and Wenbin Ouyang, a CEE graduate student; as well as Max Paulus, a graduate student at ETH Zurich. The research will be presented at the Conference on Neural Information Processing Systems.

Tough to solve

MILP problems have an exponential number of potential solutions. For instance, say a traveling salesperson wants to find the shortest path to visit several cities and then return to their city of origin. If there are many cities which could be visited in any order, the number of potential solutions might be greater than the number of atoms in the universe.  

“These problems are called NP-hard, which means it is very unlikely there is an efficient algorithm to solve them. When the problem is big enough, we can only hope to achieve some suboptimal performance,” Wu explains.

An MILP solver employs an array of techniques and practical tricks that can achieve reasonable solutions in a tractable amount of time.

A typical solver uses a divide-and-conquer approach, first splitting the space of potential solutions into smaller pieces with a technique called branching. Then, the solver employs a technique called cutting to tighten up these smaller pieces so they can be searched faster.

Cutting uses a set of rules that tighten the search space without removing any feasible solutions. These rules are generated by a few dozen algorithms, known as separators, that have been created for different kinds of MILP problems. 
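For readers who have never seen one, here is what a toy MILP looks like in code, written with PuLP, an open-source Python modelling library that calls the CBC solver by default. The numbers are invented and the choice of library is an assumption made for illustration; it is not the solver used in the study.

```python
# Toy mixed-integer linear program: choose integer numbers of trucks on two
# routes to maximise deliveries under resource limits. Illustrative only;
# PuLP/CBC is an assumed open-source toolchain, not the solvers in the study.
from pulp import LpProblem, LpVariable, LpMaximize, LpStatus, value

prob = LpProblem("toy_routing", LpMaximize)

x = LpVariable("route_a_trucks", lowBound=0, cat="Integer")
y = LpVariable("route_b_trucks", lowBound=0, cat="Integer")

prob += 40 * x + 30 * y          # objective: total packages delivered
prob += 2 * x + 1 * y <= 14      # driver-hours available
prob += 1 * x + 3 * y <= 15      # fuel budget

prob.solve()
print(LpStatus[prob.status], value(x), value(y), value(prob.objective))
```

Branching and cutting are what a solver does internally to explore the integer choices for `x` and `y` without enumerating every combination.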

Wu and her team found that the process of identifying the ideal combination of separator algorithms to use is, in itself, a problem with an exponential number of solutions.

“Separator management is a core part of every solver, but this is an underappreciated aspect of the problem space. One of the contributions of this work is identifying the problem of separator management as a machine learning task to begin with,” she says.

Shrinking the solution space

She and her collaborators devised a filtering mechanism that reduces this separator search space from more than 130,000 potential combinations to around 20 options. This filtering mechanism draws on the principle of diminishing marginal returns, which says that the most benefit would come from a small set of algorithms, and adding additional algorithms won’t bring much extra improvement.

Then they use a machine-learning model to pick the best combination of algorithms from among the 20 remaining options.

This model is trained with a dataset specific to the user’s optimization problem, so it learns to choose algorithms that best suit the user’s particular task. Since a company like FedEx has solved routing problems many times before, using real data gleaned from past experience should lead to better solutions than starting from scratch each time.

The model’s iterative learning process, known as contextual bandits, a form of reinforcement learning, involves picking a potential solution, getting feedback on how good it was, and then trying again to find a better solution.
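The loop below is a deliberately simplified, context-free epsilon-greedy bandit over a menu of candidate separator configurations. It only illustrates the pick-observe-refine cycle described above; the researchers' actual contextual-bandit model and reward signal are not specified here, and `solve_with_config` is a hypothetical stand-in for running the solver with a given configuration.

```python
# Minimal epsilon-greedy bandit over a small menu of candidate configurations.
# Generic sketch of the "pick, observe reward, try again" loop, not the
# researchers' actual learning model.
import random

def run_bandit(configs, solve_with_config, rounds=200, epsilon=0.1):
    counts = {c: 0 for c in configs}
    mean_reward = {c: 0.0 for c in configs}
    for _ in range(rounds):
        if random.random() < epsilon:              # explore
            choice = random.choice(configs)
        else:                                      # exploit best estimate so far
            choice = max(configs, key=lambda c: mean_reward[c])
        reward = solve_with_config(choice)         # e.g. observed speedup
        counts[choice] += 1
        mean_reward[choice] += (reward - mean_reward[choice]) / counts[choice]
    return max(configs, key=lambda c: mean_reward[c])

# Example usage with a fake reward function standing in for solver runs.
configs = [f"separator_set_{i}" for i in range(20)]
best = run_bandit(configs, lambda c: random.random())
print("best configuration found:", best)
```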

This data-driven approach accelerated MILP solvers between 30 and 70 percent without any drop in accuracy. Moreover, the speedup was similar when they applied it to a simpler, open-source solver and a more powerful, commercial solver.

In the future, Wu and her collaborators want to apply this approach to even more complex MILP problems, where gathering labeled data to train the model could be especially challenging. Perhaps they can train the model on a smaller dataset and then tweak it to tackle a much larger optimization problem, she says. The researchers are also interested in interpreting the learned model to better understand the effectiveness of different separator algorithms.

This research is supported, in part, by Mathworks, the National Science Foundation (NSF), the MIT Amazon Science Hub, and MIT’s Research Support Committee.


Forget problem-solving. In the age of AI, it's problem-finding that counts

In the age of AI, the most successful people will be those who can identify the problems that AI is best placed to solve.


Ravi Kumar S.


  • The global conversation around artificial intelligence (AI) has rapidly shifted from optimism to pessimism.
  • But that fear is misplaced — AI tools will always require humans to develop and direct them to where they're most useful.
  • And the most essential human skill is going to shift from problem-solving to problem-finding, which demands cognitive diversity.

The global conversation about artificial intelligence (AI) has come full circle. It has shifted from widespread curiosity (what can AI do?) to boundless optimism (AI will save the world) to sweeping pessimism (AI will destroy the world).

AI undoubtedly raises a range of serious policy issues that we are only beginning to understand. But the tenor of the current discussion is unreasonably skeptical. AI will take away some of today’s jobs; we know this because every major technology advance has done so. But we also know that AI will create significantly more new jobs and potentially at higher wages.

Rather than stifling humans, this technology will enable us to expand our knowledge, skills and productivity far beyond what most of us previously thought possible.

The employees and companies that will thrive in this new era are those that embrace the technology and accept the inevitable disruption — rather than those reflexively opposing it. This will require a profound change in mindset, one that has little precedent in previous waves of technology.

Problem-finding is the new premium skill

The most essential human skill is going to shift from problem-solving to problem-finding.

This contrasts with how workplaces have functioned since the Industrial Revolution. For decades, the emphasis has been on taking an obvious problem and finding an unobvious solution. But AI — when combined with human ingenuity — has unprecedented problem-solving power, so much so that it may free humans to spend more time in creative pursuits. The real challenge of applying AI productively is going to be “use case” discovery: identifying cross-disciplinary urgent problems that are best suited to AI technology.

One good illustration of a surprising application of AI is boosting the productivity of salmon farming — a crucial step towards promoting sustainable aquaculture. Fish farmers are now using AI and machine perception tools (that can take in and process sensory information) to automate feeding time in accordance with the hunger levels of the fish. This reduces wasted feed, trimming a significant carbon emissions source, while improving salmon growth metrics.

A recent collaboration between Tidal AI (a project inside X, Alphabet’s Moonshot Factory) and Cognizant will build on the initial success and expand to other sectors of what’s known as the Blue Economy, including shipping via sea transport. Already, companies can use machine learning models to analyze micro-weather systems, current speeds and port data traffic to optimize shipping route and port arrival times for lower fuel usage.

Why diversity matters to problem-finding

Problem-finding, unlike problem-solving, is going to demand cognitive diversity. To navigate this landscape successfully, businesses will require a more diversely skilled workforce — one that understands human behavior (sociology, psychology, anthropology), can create and optimize different processes (design thinking, six sigma, industry-specific knowledge) and engage audiences intellectually and emotionally through storytelling and design. Liberal arts majors will play as big of a role as STEM graduates. They will help humanize AI and give it more nuanced judgment.

In the face of an increasingly complex and unpredictable world, organizations need to embrace the mantra that “great minds think different — not alike.” Homogeneous cultures tend to stifle cognitive diversity because of the pressure to conform. We can’t tackle 21st-century problems purely through top-down analysis and the application of big data. We need people who can ask great questions, see around corners, think outside the mainstream, understand context, tell us not only what’s happening but why it’s happening and look at the world through their customers’ eyes. That’s why cognitive diversity is so important to maintaining a business’s relevance to its customers and employees.

The prevalence of immigrant founders, researchers and leaders in the US AI industry is a testament to the importance of different perspectives and backgrounds to ensure the country maintains its leadership position as the industry grows. According to one recent study, 28 of 43 (65%) of the top AI companies in the US were founded or co-founded by immigrants.

It is clear that even as generative AI advances towards human-like capabilities, there is no near-term prospect that it will replace human work. Human imagination and ingenuity will be the source of human work indefinitely. People are still going to be essential to solving the vital policy issues raised by AI.

Artificial Intelligence Technology and Social Problem Solving

Yeunbae Kim

Intelligent Information Technology Research Center, Hanyang University, Seoul, Korea

Jaehyuk Cha

Department of Computer Science, Hanyang University, Seoul, Korea

Modern societal issues occur in a broad spectrum with very high levels of complexity and challenges, many of which are becoming increasingly difficult to address without the aid of cutting-edge technology. To alleviate these social problems, the Korean government recently announced the implementation of mega-projects to solve low employment, population aging, low birth rate and social safety net problems by utilizing AI and ICBM (IoT, Cloud Computing, Big Data, Mobile) technologies. In this letter, we will present the views on how AI and ICT technologies can be applied to ease or solve social problems by sharing examples of research results from studies of social anxiety, environmental noise, mobility of the disabled, and problems in social safety. We will also describe how all these technologies, big data, methodologies and knowledge can be combined onto an open social informatics platform.

Introduction

A string of breakthroughs in artificial intelligence has placed AI in increasingly visible positions in society, heralding its emergence as a viable, practical, and revolutionary technology. In recent years, we have witnessed IBM’s Watson win first place in the American quiz show Jeopardy! and Google’s AlphaGo beat the Go world champion, and in the very near future, self-driving cars are expected to become a common sight on every street. Such promising developments spur optimism for an exciting future produced by the integration of AI technology and human creativity.

AI technology has grown remarkably over the past decade. Countries around the world have invested heavily in AI technology research and development. Major corporations are also applying AI technology to social problem solving; notably, IBM is actively working on their Science for Social Good initiative. The initiative will build on the success of the company’s noted AI program, Watson, which has helped address healthcare, education, and environmental challenges since its development. One particularly successful project used machine learning models to better understand the spread of the Zika virus. Using complex data, the team developed a predictive model that identified which primate species should be targeted for Zika virus surveillance and management. The results of the project are now leading new testing in the field to help prevent the spread of the disease [ 1 ].

On the other hand, investments in technology are generally mostly used for industrial and service growth, while investments for positive social impact appear to be relatively small and passive. This passive attitude seems to reflect the influence of a given nation’s politics and policies rather than the absence of technology.

For example, in 2017 only 4.2% of the Korean government's ICT (Information and Communication Technology) R&D budget was used for social problem solving, but this share will be increased to 45% within the next five years, as improving people's livelihoods and addressing social problems have been selected as priorities by the present government [ 2 ]. In addition, new categories within ICT, including AI, are required as a key means of improving quality of life and achieving population growth in this country.

In this letter, I introduce research on the informatics platform for social problem solving, specifically based on spatio-temporal data, conducted by Hanyang University and cooperating institutions. This research ultimately intends to develop informatics and convergent scientific methodologies that can explain, predict and deal with diverse social problems through a transdisciplinary convergence of social sciences, data science and AI. The research focuses on social problems that involve spatio-temporal information, and applies social scientific approaches and data-analytic methods on a pilot basis to explore basic research issues and the validity of the approaches. Furthermore, (1) open-source informatics using convergent-scientific methodology and models, and (2) the spatio-temporal data sets that are to be acquired in the midst of exploring social problems for potential resolution are developed.

In order to examine the applicability of the models and informatics platform in addressing a variety of social problems in the public as well as in private sectors, the following social problems are identified and chosen:

  • Analysis of individual characteristics with suicidal impulse
  • Study on the mobility of the disabled using GPS data
  • Visualization of the distribution of anxiety using Social Network Services
  • Big data-based analysis of the noise environment and exploration of technical and institutional solutions for its improvement
  • Analysis of the response governance regarding the Middle Eastern Respiratory Syndrome (MERS)

The research issues in the above social problems are explored, and the validity of the convergent-scientific methodologies is tested. The feasibility of potentially resolving the problems is also examined. The relevant data and information are stored in a knowledge base (KB), and at the same time the research methods used in data extraction, collection, analysis and visualization are also developed. Furthermore, the KB and the method database are merged into an open informatics platform so that they can be used in various research projects, business activities, and policy debates.

Pilot Research and Studies on Social Problem Solving

Analysis of Individual Characteristics with Suicidal Impulse

While suicide rates in OECD countries are declining, South Korea is the only member whose suicide rate is increasing; moreover, Korea currently has the highest suicide rate among OECD countries, as shown in Fig. 1. This high suicide rate is one of Korea's biggest social problems, and it calls for effective suicide prevention measures based on an understanding of the causes of suicide. The goals of the research are to: (1) understand suicidal impulse by analyzing the characteristics of members of society according to suicidal impulse experience; (2) predict the likelihood of attempting suicide and analyze the spatio-temporal quality of life; and (3) establish a policy to help prevent suicide.

Fig. 1. Suicide rate by OECD countries, 2013.

The Korean Social Survey and the Survey of Youth Health Status data are used for the analysis of suicide risk groups through data mining, using a predictive model based on cell propagation to overcome the limitations of existing statistical methods such as characterization and classification. With the characterization technique, results indicate that there are too many features related to suicide, and many variables take a large number of categorical values, making it difficult to identify the variables that affect suicide. The classification technique, on the other hand, had difficulty identifying the variables that affect suicide because the number of members attempting suicide was too small.

Correlations between suicide impulses and individual attributes of members of society and the trends of the correlations by year are obtained. The concepts of support, confidence and density are introduced to identify risk groups of suicide attempts, and computational performance problems caused by excessive numbers of risk groups are solved by applying a convex growing method.

The 2014 social survey, including personal and household information of members of the society, is used for the analysis. The attributes include gender, age, education, marital status, level of satisfaction, disability status, occupation status, housing, and household income.

The high-risk suicide cluster was identified using a small number of convexes. A convex is a set of cells, with one cell being the smallest unit of the cluster for the analysis, and a density is the ratio of the number of non-empty cells to the total number of cells in convex C [ 3 ].
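As a minimal illustration of this density measure (not the full convex-growing algorithm of [ 3 ]), the snippet below bins a handful of invented survey records into cells over two attributes and computes the share of non-empty cells in a candidate convex.

```python
# Sketch of the density measure defined above: bin survey records into cells
# over two attributes and compute, for a candidate set of cells C, the share
# of cells that are non-empty. The toy records and cell grid are assumptions.
from collections import Counter

records = [  # (household_income_band, education_level) for a few toy respondents
    (1, 1), (1, 2), (2, 1), (2, 1), (3, 4), (1, 1),
]
cell_counts = Counter(records)              # cell -> number of respondents

def density(convex_cells):
    """Ratio of non-empty cells to all cells in the candidate convex C."""
    non_empty = sum(1 for cell in convex_cells if cell_counts[cell] > 0)
    return non_empty / len(convex_cells)

candidate = [(1, 1), (1, 2), (2, 1), (2, 2)]   # a 2x2 block of cells
print(density(candidate))                       # 0.75 for this toy data
```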

Figure 2 shows that the highest suicidal risk group C1 is composed of members with low income and education level. It was identified that level of satisfaction with life has the highest impact on suicidal impulse, followed in order of impact by disability, marital status, housing, household income, occupation status, gender, age and level of education. The results showed that women and young people tend to have more suicidal impulse.

Fig. 2. Suicide risk groups represented by household income and level of education.

New prediction models with other machine learning methods and the establishment of mitigation policies are still in development. Subjective analyses of change of well-being, social exclusion, and characteristics of spatio-temporal analysis will also be explored in the future.

Study on the Mobility of the Disabled Using GPS Data

Mobility rights are closely related to quality of life as a part of social rights. Therefore, social efforts are needed to guarantee mobility rights to both the physically and mentally disabled. The goal of the study is to suggest a policy for the extension of mobility rights of the disabled. In order to achieve this, travel patterns and socio-demographic characteristics of the physically impaired with low levels of mobility are studied. The study focused on individuals with physical impairments as the initial test group as a means to eventually gain insight into the mobility of the wider disabled population. Conventional studies on mobility measurement obtained data from travel diaries, interviews, and questionnaire surveys. A few studies used geo-location tracking GPS data.

GPS data are collected via mobile devices and used to analyze mobility patterns (distance, speed, frequency of outings) with regression analysis, and to search for methods to extend mobility. A new mobility metric with a new indicator (travel range) was developed, and the way mobility impacts the quality of life of the disabled has been verified [ 4 ].
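The paper does not spell out how the travel-range indicator is computed, so the following sketch shows one plausible reading: the largest great-circle distance between any two GPS fixes in a person's trace. Treat the definition and the toy coordinates as assumptions.

```python
# Sketch of one plausible "travel range" indicator: the largest great-circle
# distance between any two GPS fixes in a trace. The exact definition used in
# the study is not given here.
from math import radians, sin, cos, asin, sqrt
from itertools import combinations

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def travel_range_km(trace):
    """Maximum pairwise distance over all recorded fixes (O(n^2), fine for small traces)."""
    return max(haversine_km(p, q) for p, q in combinations(trace, 2))

trace = [(37.5665, 126.9780), (37.5512, 126.9882), (37.5796, 126.9770)]  # toy Seoul fixes
print(round(travel_range_km(trace), 2))
```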

About 100 people with physical disabilities participated, collecting more than 100,000 geo-location data points over a month using an open mobile application called traccar. Their trajectories are visualized from the GPS data, as shown in Fig. 3.

Fig. 3. Visualization of trajectories of the disabled using geo-location data.

The use of location data explained mobility status better than the conventional questionnaire survey method. The questionnaire surveyed mainly the frequency of outings over a certain period and number of complaints about these outings. GPS data enabled researchers to conduct empirical observations on distance and range of travel. It was found that the disabled preferred bus routes that visit diverse locations over the shortest route. Age and monthly income are negatively associated with a disabled individual’s mobility.

Based on the research results, the following has been suggested: (1) development of new bus routes for the disabled and (2) recommendation of a new location for the current welfare center that would enable a greater range of travel. Further study on travel patterns by using indoor positioning technology and CCTV image data will be deployed.

Visualization of the Distribution of Anxiety Using Social Network Services

Many social issues including political polarization, competition in private education, increases in suicide rate, youth unemployment, low birth rate, and hate crime have anxiety as their background. The increase of social anxiety can intensify competition and conflict, which can interfere with social solidarity and cause a decrease in social trust.

Existing social science research mainly focused on grasping public opinion through questionnaires, and ignored the role of emotions. The Internet and social media were used to access emotional traits since they provide a platform not only for the active exchange of information, but also for the sharing and diffusion of emotional responses. If such emotional responses on the internet and geo-locations can be captured in real-time through machine learning, their spatio-temporal distribution could be visualized in order to observe their current status and changes by geographical region.

A visualization system was built to map the regional and temporal distribution of anxiety psychology by combining spatio-temporal information using SNS (Twitter) with sentiment analysis. A Twitter message collecting crawler was also developed to build a dictionary and tweet corpus. Based on these, an automatic classification system of anxiety-related messages was developed for the first time in Korea by applying machine learning to visualize the nationwide distribution of anxiety (See Fig.  4 ) [ 5 ].

Fig. 4. Process of Twitter message classification.

An average of 5,500 tweets with place_id are collected using the Twitter4j open API. To date, about 820,000 units of data have been collected. A Naïve Bayes classifier was used for anxiety identification. An accuracy of 84.8% was obtained by using 1,750 anxiety and 70,830 non-anxiety tweets as training data, and 585 anxiety and 23,610 non-anxiety tweets as testing data.
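For readers unfamiliar with the approach, the sketch below trains a Naive Bayes text classifier with scikit-learn on a few made-up English stand-ins for the Korean tweet corpus; the project's actual dictionary, features and data are not reproduced.

```python
# Minimal sketch of a Naive Bayes text classifier of the kind described above,
# using scikit-learn on invented example texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "so worried about the exam tomorrow", "I can't stop feeling anxious",
    "great lunch with friends today", "watching the game tonight",
]
train_labels = ["anxiety", "anxiety", "non-anxiety", "non-anxiety"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["feeling nervous about the interview"]))  # likely ['anxiety']
```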

The system indicated the existence of regional disparities in anxiety emotions. It was found that Twitter users who reside in politicized regions have a lower degree of disclosure about their residing areas. This can be interpreted as the act of avoiding situations where the individual and the political position of the region coincide.

As anxiety is not a permanent characteristic of an individual, it can change depending on the time and situation, making it difficult to measure by questionnaire survey at any given time. The Twitter-based system can compensate for the limitations of such a survey method because it can continuously classify accumulated tweet text data and provide a temporal visualization of anxiety distribution at a given time within a desired visual scale (by ward, city, province and nationwide) as shown in Fig.  5 .

Fig. 5. Regional distribution of anxiety in Korean society and visualization by geo-scale.

Big Data-Based Analysis of Noise Environment and Exploration of Technical and Institutional Solutions for Its Improvement

Environmental issues are a major social concern in our age, and interest has been increasing not only in the consequences of pollution but also in the effects of general environmental aesthetics on quality of life. There is much active effort to improve the visual environment, but not nearly as much interest has been given to improve the auditory environment. Until now, policies on the auditory environment have remained passive countermeasures to simply quantified acoustic qualities (e.g., volume in dB) in specific places such as construction sites, railroads, highways, and residential areas. They lack a comprehensive study of contextual correlations, such as the physical properties of sound, the environmental factors in time and space, and the human emotional response of noise perception.

The goal of this study is to provide a cognitive-based, human-friendly solution to improve noise problems. In order to achieve this, the study aimed to (1) develop a tool for collecting sound data and converting into a sound database, and (2) build spatio-temporal features and a management platform for indoor and outdoor noise sources.

First, pilot experiments were conducted to predict the indicators that measure emotional reactions by developing a handheld device application for data collection.

Three separate free-walking experiments and in-depth interviews were conducted with 78 subjects at international airport lobbies and outdoor environments.

Through the experiment, the behavior patterns of the subjects in various acoustic environments were analyzed, and indicators of emotional reactions were identified. It was determined that the psychological state and the personal environment of the subject are important indicators of the perception of the auditory environment. In order to take into account both the psychological state of the subject and the physical properties of the external sound stimulus, an omnidirectional microphone is used to record the entire acoustic environment.

118 subjects with smartphones with the built-in application walked for an hour in downtown Seoul for data collection. On the app, after entering the prerequisite information, subjects pressed 'Good' or 'Bad' whenever they heard a sound that caught their attention. Pressing the button would record the sound for 15 s, and subjects were additionally asked to answer a series of questions about the physical characteristics of the specific location and the characteristics of the auditory environment. During the one-hour experiment, about 600 sound environment reports were accumulated, with one subject reporting the sound characteristics from an average of 5 different places.

Unlike previous studies, the subjects’ paths were not pre-determined, and the position, sound and emotional response of the subject are collected simultaneously. The paths can be displayed to analyze the relations of the soundscapes to the paths (Fig.  6 ).

Fig. 6. Subjects' paths and marks for sound types.

The study helped to build a positive auditory environment for specific places, to provide policy data for noise regulation and positive auditory environments, to identify the contexts and areas that are alienated from the auditory environment, and to extend the social meaning of “noise” within the study of sound.

Analysis of the Response Governance Regarding the Middle Eastern Respiratory Syndrome (MERS)

The development and spread of new infectious diseases are increasing due to the expansion of international exchange. As can be seen from the MERS outbreak in Korea in 2015, epidemics have profound social and economic impacts. It is imperative to establish an effective shelter and rapid response system (RRS) for infectious diseases control.

The goal of the study is to compare the official response system with the actual response system in order to understand the institutional mechanism of the epidemic response system, and to find effective policy alternatives through the collaboration of policy scholars and data scientists.

Web-based newspaper articles were analyzed to compare the official crisis response system designed to operate in outbreaks to the actual crisis response. An automatic news article crawling tool was developed, and 53,415 MERS-related articles were collected, clustered and stored in the database (Fig.  7 ).

Fig. 7. Automatic news article collection and classification system.

In order to manage and search for news articles related to MERS from the article database, a curation tool was developed. This tool is able to extract information into triplet graphs (subjects/verbs/objects) from the articles by applying natural language processing techniques. A basic dictionary for the analysis of the infectious disease response system was created based on the extracted triplet information. The information extracted by the curation tool is massive and complex, which limits the ability to correctly understand and interpret information.
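As a rough sketch of how such subject-verb-object extraction can work, the snippet below uses spaCy's dependency parse on an English sentence; the project's actual (presumably Korean-language) pipeline is not described in enough detail to reproduce, so treat the details as assumptions.

```python
# Rough sketch of subject-verb-object triplet extraction with spaCy, as one
# way a curation tool could work. Assumes the small English model is installed
# (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triplets(text):
    triplets = []
    doc = nlp(text)
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            for s in subjects:
                for o in objects:
                    triplets.append((s.text, token.lemma_, o.text))
    return triplets

print(extract_triplets("The health ministry dispatched inspection teams to the hospitals."))
# typically [('ministry', 'dispatch', 'teams')]
```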

A tool for visualizing information at a specific time with a network graph was developed and utilized to facilitate analysis and visualization of the networks (Fig.  8 ). All tools are integrated into a single platform to maximize the efficiency of the process.

Fig. 8. Visualization of the graph network at a specific time.

As for the official crisis response manual in case of an infectious disease, social network analysis indicated that while the National Security Bureau (NSB) and Public Health Centers play as large a role as the Center for Disease Control (CDC) in crisis management, the analysis of the news articles showed that the NSB was in fact rarely mentioned. It was found that the CDC and the Central Disaster Response Headquarters, the official government organizations that deal with infectious diseases, as well as the Central MERS Management Countermeasures & Support Headquarters, a temporarily established organization, did not play an important role in the response to the MERS outbreak. On the other hand, the Ministry of Health and Welfare, medical institutions, and local governments all played a central role in responding to MERS. This means that the structure and characteristics of the command-and-control and communication arrangements in the official response system seem to have a decisive influence on the cooperative response in a real crisis. These results provided concrete information on the role of each responder and on the communication system, which previous studies based on interviews and surveys had not found.
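The sketch below shows the general shape of such an analysis: build a co-mention graph of organisations from article data and rank them by degree centrality using networkx. The toy co-mention lists are invented for illustration and do not reproduce the study's data.

```python
# Sketch of the kind of social-network analysis described: organisations
# mentioned together in the same article become connected nodes, and degree
# centrality ranks how central each one is. The article lists are invented.
import networkx as nx
from itertools import combinations

articles = [
    ["Ministry of Health and Welfare", "CDC", "Medical institutions"],
    ["Ministry of Health and Welfare", "Local governments"],
    ["CDC", "Central Disaster Response Headquarters"],
]

G = nx.Graph()
for orgs in articles:
    for a, b in combinations(orgs, 2):
        G.add_edge(a, b)

centrality = nx.degree_centrality(G)
for org, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {org}")
```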

Much machine learning research has been criticized for placing more importance on the method itself than on the reliability of the data.

This study is based on a KB in which policy researchers manually analyze news articles and prepare basic data by tagging them. This way, it provides a basis for improving the reliability of results when executing text mining work through machine learning.

By using text mining techniques and social network analysis, it is possible to get a comprehensive view of social problems such as the occurrence of infectious diseases by examining the structure and characteristics of the response system from a holistic perspective of the entire system.

With the results of this study, new policies for infectious disease control are suggested in the following directions: (1) Strengthen cooperation networks in early response systems of infectious diseases; (2) Develop new, effective and efficient management plans of cooperative networks; and (3) Create new research to cover other diseases such as avian influenza and SARS [ 6 ].

Convergent Approaches and Open Informatics Platform

An ever-present obstacle in the traditional social sciences when addressing social issues is the difficulty of obtaining evidence from massive data for hypothesis and theory verification. Data science and AI can ease such difficulties and support social science by discovering hidden contexts and new patterns of social phenomena via low-cost analyses of large data sets. On the other hand, knowledge and patterns derived by machine learning from a large, noisy data set often lack validity. Although data-driven inductive methods are effective for finding patterns and correlations, there is a clear limit to their ability to discover causal relationships.

Social science can help data science and AI by interpreting social phenomena through humanistic literacy and social-scientific thought to verify theoretical validity, and identifying causal relationships through deductive and qualitative approaches. This is why we need convergent-scientific approaches for social problem solving. Convergent approaches offer the new possibility of building an informatics platform that can interpret, predict and solve various social problems through the combination of social science and data science.

In all 5 pilot studies, the convergent-scientific approaches were found valid and sound. Most of the research agendas involved the real-time collection of data, the development of spatio-temporal databases, and analytic visualization of the results. Such visualization promises new possibilities in data interpretation. The data sets and tools for data collection, analysis and visualization are integrated onto an informatics platform so that they can be used in future research projects and policy debates.

The research was the first transdisciplinary attempt to converge social sciences and data sciences in Korea. This approach will offer a breakthrough in predicting, preventing and addressing future social problems. The research methodology, as a trailblazer, will offer new ground for a research field of a transdisciplinary nature converging data sciences, AI and social sciences. The data, information, knowledge, methodologies, and techniques will all be combined onto an open informatics platform. The platform will be maintained on an open-source basis so that it can be used as a hub for various academic research projects, business activities, and policy debates (See Fig.  9 ). The Open Informatics Platform is planned to be expanded to incorporate citizen sensing, in which people’s observations and views are shared via mobile devices and Internet services in the future [ 7 ].

Fig. 9. Structure of the informatics platform.

Conclusions

In the area of social problem solving, fundamental problems have complex political, social and economic aspects that have their roots in human nature. Both technical and social approaches are essential for tackling social problems; in fact, it is the integrated, orchestrated marriage between the two that would bring us closer to effective social problem management.

We need to first study and carefully define the indicators specific to a given social problem or domain. There are many qualitative indicators that cannot be directly and explicitly measured such as social emotions, basic human needs and rights, and life fulfillment [ 8 ].

If the results of machine learning are difficult to measure or include combinations of results that are difficult to define, that particular social problem may not be suitable for machine learning. Therefore, there is a need for new social methods and algorithms that can accurately collect and identify the measurable indicators from opinions of social demanders. Recently, MIT has developed a device to quantitatively measure social signals. The small, lightweight wearable device contains sensors that record the people’s behaviors (physical activity, gestures, and the amount of variation in speech prosody, etc.) [ 9 ].

Machine learning technologies working on already existing data sets are relatively inexpensive compared to conventional million-dollar social programs since machine learning tools can be easily extended. However, they can introduce bias and errors depending on the data content used to train machine learning models or can also be misinterpreted. Human experts are always needed to recognize and correct erroneous outputs and interpretations in order to prevent prejudices [ 10 ].

In the development of AI applications, a great amount of time and resources is required to sort, identify and refine data to provide massive data sets for training. For instance, machine learning models need to learn from millions of photos to recognize specific animals or faces, whereas human intelligence is able to recognize visual cues after looking at only a few photos. Perhaps it is time to develop new AI frameworks that can infer and recognize objects based on small amounts of data, such as transfer learning [ 11 ], that can generate the missing data (e.g., with GANs), or that integrate traditional AI technologies such as symbolic AI with statistical machine learning.

Machine learning is excellent at prediction, but many solutions to social problems do not depend on prediction. The organic ways in which solutions to specific problems actually unfold under new policies and programs can be more practical and worth studying than building a cure-all machine learning algorithm. While the evolution of AI is progressing at a stunning rate, there are still challenges in solving social problems. Further research on the integration of social science and AI is required.

A world in which artificial intelligence actually makes policy decisions is still hard to imagine. Considering the current limitations and capabilities of AI, AI should primarily be used as a decision aid.

Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT 1 ) (No. 2018R1A5A7059549).

1 Ministry of Science and ICT.

Contributor Information

Fernando Koch, Email: fkoch@unimelb.edu.au

Atsushi Yoshikawa, Email: at_sushi_bar@dis.titech.ac.jp

Shihan Wang, Email: sw1989bb@gmail.com

Takao Terano, Email: terano@plum.plala.or.jp

Yeunbae Kim, Email: kimyeunbae@hanyang.ac.kr

Jaehyuk Cha, Email: chajh@hanyang.ac.kr


Understanding Problem Decomposition in Artificial Intelligence – Techniques and Applications

Intelligence is the capability of a system to acquire and apply knowledge, to reason and understand, and to adapt and learn from experience. In the field of artificial intelligence, the ability to decompose complex problems into smaller, more manageable parts is a crucial skill for solving challenging tasks. Problem decomposition refers to the process of breaking down a complex problem into its constituent sub-problems in order to solve them individually and then combine the solutions to obtain the desired outcome.

AI problem decomposition techniques involve various strategies and algorithms that enable intelligent systems to analyze and solve complex problems effectively. These techniques rely on the principles of abstraction, hierarchy, and modularity to break down a problem into smaller and more manageable tasks. By decomposing a problem, AI systems can leverage specialized knowledge and algorithms to solve each sub-problem efficiently, resulting in a more efficient and scalable solution for the overall problem.

Problem decomposition finds applications in various domains of artificial intelligence, including natural language processing, computer vision, planning and scheduling, and robotics. In natural language processing, for example, a complex task such as question answering can be decomposed into smaller sub-tasks such as parsing, information retrieval, and answer generation. Similarly, in computer vision, object recognition and scene understanding can be decomposed into tasks such as feature extraction, classification, and object localization.
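As a toy illustration of that question-answering decomposition, the sketch below wires three deliberately trivial sub-task functions, hypothetical stand-ins for a real parser, retriever and answer generator, into a single pipeline.

```python
# Sketch of the question-answering decomposition mentioned above as a pipeline
# of three sub-task functions. Each stage is a trivial stand-in; in a real
# system they would wrap a parser, a retriever and a generator.

def parse(question):
    """Sub-task 1: turn the question into a bag of keywords."""
    return [w.strip("?").lower() for w in question.split()]

def retrieve(keywords, documents):
    """Sub-task 2: fetch the document that shares the most keywords."""
    return max(documents, key=lambda d: sum(k in d.lower() for k in keywords))

def generate_answer(question, passage):
    """Sub-task 3: produce an answer from the retrieved passage."""
    return f"Based on the retrieved text: {passage}"

def answer(question, documents):
    keywords = parse(question)
    passage = retrieve(keywords, documents)
    return generate_answer(question, passage)

docs = ["Paris is the capital of France.", "Berlin is the capital of Germany."]
print(answer("What is the capital of France?", docs))
```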

Overview of Problem Decomposition Techniques

Problem decomposition is a fundamental technique in artificial intelligence that involves breaking down complex problems into smaller, more manageable subproblems. By decomposing a problem, we can tackle each subproblem independently, allowing for more efficient and effective problem-solving approaches.

There are several techniques commonly used for problem decomposition in artificial intelligence. One popular approach is known as divide and conquer, where the problem is divided into smaller parts that can be solved individually. This technique is particularly useful for problems that can be easily divided into independent subproblems.

Another technique is known as hierarchical decomposition, where the problem is divided into a hierarchy of subproblems. Each subproblem is then further decomposed until a solution is reached. This approach is often used when the problem has a natural hierarchical structure.

Parallel decomposition is another common technique, where the problem is decomposed into subproblems that can be solved in parallel. This can lead to significant speedup in problem solving, especially for computationally intensive tasks.

Iterative decomposition is a technique that involves decomposing a problem into smaller subproblems and solving them iteratively. The solution to each subproblem is used to refine the solution to the overall problem. This approach is particularly useful when the solution to the problem cannot be easily obtained in one step.

Overall, problem decomposition techniques play a crucial role in artificial intelligence by allowing us to tackle complex problems effectively. By breaking down a problem into smaller, more manageable subproblems, we can develop more efficient and scalable problem-solving algorithms.

Divide and Conquer Approach in Problem Decomposition

In the field of artificial intelligence, problem decomposition is a crucial technique that helps break down complex problems into simpler and more manageable sub-problems. One popular approach for problem decomposition is the “divide and conquer” approach.

The basic idea behind the divide and conquer approach is to divide a problem into smaller sub-problems, solve each sub-problem independently, and then combine the solutions to obtain the final solution to the original problem. This approach is particularly useful when dealing with large-scale and intricate problems in artificial intelligence.
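To make the pattern concrete, here is a minimal, illustrative sketch in Python (the function names are ours, not drawn from any particular AI library): a recursive routine splits the input, solves each half independently, and merges the partial results, with merge sort as the classic instance.

```python
def merge_sort(items):
    """Divide and conquer: split, solve sub-problems recursively, combine."""
    if len(items) <= 1:               # base case: trivially solved sub-problem
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # solve left sub-problem
    right = merge_sort(items[mid:])   # solve right sub-problem
    # combine step: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]


print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```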

The divide and conquer approach offers several advantages. Firstly, it allows the problem solver to focus on a specific part of the problem at a time, making it easier to understand and solve. Secondly, it enables parallel processing, where different sub-problems can be solved simultaneously, improving efficiency and reducing the overall computational time.

Furthermore, problem decomposition using the divide and conquer approach promotes modularity and reusability. By breaking down a problem into smaller components, each component can be developed and tested independently. This makes it easier to maintain and update the system in the long run.

However, the divide and conquer approach also comes with its challenges. One main challenge is ensuring that the sub-problems are mutually exclusive and collectively exhaustive. Mutual exclusivity ensures that no two sub-problems overlap, while collective exhaustiveness ensures that all parts of the problem are covered by the sub-problems.

Additionally, finding an optimal way to divide the problem into sub-problems can be a challenging task. The division should be done in such a way that the sub-problems are of approximately equal size and complexity, to ensure efficient processing and avoid bottlenecks in the system.

In conclusion, the divide and conquer approach is a valuable technique in problem decomposition in artificial intelligence. It enables the efficient handling of complex problems by breaking them down into smaller, manageable sub-problems. While it comes with its challenges, with careful planning and consideration, the divide and conquer approach can greatly enhance problem-solving in artificial intelligence.

Hierarchical Decomposition Techniques in Artificial Intelligence

Problem decomposition plays a crucial role in artificial intelligence as it allows complex problems to be broken down into smaller, more manageable sub-problems. Hierarchical decomposition is a popular technique used in problem-solving that involves breaking down a problem into a hierarchy of sub-problems, each of which can be addressed independently.

One of the main advantages of using hierarchical decomposition in artificial intelligence is that it allows for modular design. By dividing a problem into smaller sub-problems, each sub-problem can be solved separately and then integrated back into the overall solution. This modular approach makes it easier to handle complex problems and allows for better scalability and flexibility.

Benefits of Hierarchical Decomposition

Hierarchical decomposition offers several benefits when applied to artificial intelligence problems. First, it allows for a better understanding of the problem by breaking it down into smaller, more manageable pieces. This can lead to more efficient problem-solving strategies and improved decision-making.

Second, hierarchical decomposition helps in reducing complexity. By dividing a problem into smaller sub-problems, each sub-problem can be solved independently, which simplifies the overall problem-solving process. This simplification makes it easier to develop and evaluate algorithms, leading to higher accuracy and performance.

Applying Hierarchical Decomposition

To apply hierarchical decomposition in artificial intelligence, a problem is first analyzed to identify its main components and their relationships. This analysis helps in identifying the high-level tasks and sub-tasks that need to be performed. Once these tasks are identified, they can be further decomposed into smaller, more specific tasks.

A common way to represent the decomposition hierarchy is through the use of a table. The table can show the hierarchy of tasks, with each row representing a specific task and its relationship to other tasks. This visual representation helps in organizing and understanding the problem and facilitates the development of algorithms and decision-making processes.
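As an alternative to a table, the hierarchy can also be sketched directly in code. The following illustrative Python snippet (the task names and structure are hypothetical) represents a task hierarchy as nested dictionaries and solves it bottom-up: leaf sub-tasks are solved first, and composite tasks assemble their children's results.

```python
def solve(task):
    """Recursively solve a hierarchy: leaves are callables, nodes are dicts of subtasks."""
    if callable(task):                      # leaf sub-problem: solve directly
        return task()
    return {name: solve(sub) for name, sub in task.items()}  # composite task


# Hypothetical decomposition of a trip-planning problem.
plan_trip = {
    "book_transport": {
        "find_flight": lambda: "flight AB123",
        "find_train":  lambda: "train 42",
    },
    "book_hotel": lambda: "hotel reserved",
}

print(solve(plan_trip))
```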

In conclusion, hierarchical decomposition techniques are an effective approach to tackling complex problems in artificial intelligence. By breaking down a problem into smaller, more manageable sub-problems, hierarchical decomposition allows for modular design, better problem understanding, and reduced complexity. These techniques enable the development of efficient algorithms and decision-making processes, leading to improved performance in artificial intelligence systems.

Parallel Problem Decomposition in Artificial Intelligence Applications

Problem decomposition is a fundamental concept in artificial intelligence, where complex problems are divided into smaller, more manageable sub-problems. This enables efficient problem-solving by distributing the workload across multiple agents or processors. Parallel problem decomposition, in particular, refers to the simultaneous decomposition of a problem into multiple parts that can be processed in parallel. This approach has gained significant attention in the field of artificial intelligence due to its ability to tackle large-scale problems more effectively.

One of the key advantages of parallel problem decomposition is its ability to exploit the power of parallel computing resources. By dividing a problem into smaller parts, multiple processors or agents can work on different sub-problems simultaneously. This significantly reduces the time required to solve the problem, as each sub-problem can be processed independently and concurrently.

Techniques for Parallel Problem Decomposition:

Various techniques have been developed to enable parallel problem decomposition in artificial intelligence applications. One commonly used technique is task decomposition, where different agents or processors are assigned specific tasks within the problem-solving process. Each agent solves its assigned sub-problem, and the results are then combined to obtain the final solution.

Another technique is data decomposition, where the problem data is divided among multiple agents or processors. Each agent processes its assigned data subset and shares the results with other agents. This allows for parallel processing of the problem data, leading to faster and more efficient problem-solving.
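A hedged sketch of data decomposition, using only the Python standard library (the chunking scheme and worker function are illustrative): the input is split into chunks, each chunk is processed by a separate worker process, and the partial results are combined at the end.

```python
from concurrent.futures import ProcessPoolExecutor


def process_chunk(chunk):
    """Solve one data sub-problem; here, a toy aggregation."""
    return sum(x * x for x in chunk)


def parallel_decompose(data, n_chunks=4):
    chunks = [data[i::n_chunks] for i in range(n_chunks)]     # data decomposition
    with ProcessPoolExecutor(max_workers=n_chunks) as pool:
        partial_results = list(pool.map(process_chunk, chunks))
    return sum(partial_results)                               # combine sub-results


if __name__ == "__main__":
    print(parallel_decompose(list(range(1_000_000))))
```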

Applications of Parallel Problem Decomposition in Artificial Intelligence:

Parallel problem decomposition has found numerous applications in various domains of artificial intelligence. One notable application is in machine learning, where large datasets are often processed. By using parallel problem decomposition techniques, the training of machine learning models can be accelerated, leading to quicker and more accurate predictions.

Parallel problem decomposition is also widely used in optimization problems, such as integer programming and constraint satisfaction problems. By decomposing the problem and solving smaller sub-problems in parallel, better solutions can be obtained in a shorter time frame.

In conclusion, parallel problem decomposition is a powerful technique in artificial intelligence applications. By dividing complex problems into smaller parts and leveraging parallel computing resources, it enables faster and more efficient problem-solving. However, it also presents challenges related to coordination, synchronization, and load balancing. Overall, parallel problem decomposition has significant potential for improving the performance of artificial intelligence applications.

Decomposition-Based Algorithms for Solving Complex Problems

In the field of artificial intelligence, decomposition-based algorithms have proven to be effective in solving complex problems. These algorithms tackle large and complex problem instances by breaking them down into smaller, more manageable subproblems. This approach is particularly useful in domains where the problem space is vast or the problem structure is highly complex.

Decomposition-based algorithms utilize various techniques to decompose a problem into subproblems. One common technique is to divide the problem space based on different dimensions or attributes. By decomposing the problem based on these dimensions, the algorithm can focus on solving smaller subproblems, which are easier to handle.

Another approach used by decomposition-based algorithms is to decompose the problem based on the relationships between different problem components. By identifying these relationships, the algorithm can break down the problem into subproblems that can be solved independently. The solution to each subproblem is then combined to obtain the overall solution to the complex problem.

Benefits of Decomposition-Based Algorithms

The use of decomposition-based algorithms offers several benefits in solving complex problems. Firstly, these algorithms can simplify the problem-solving process by breaking down complex problems into smaller, more manageable subproblems. This enables the algorithm to focus on solving individual subproblems more efficiently.

Additionally, decomposition-based algorithms can improve the scalability of problem-solving approaches. By decomposing a problem, the algorithm can distribute the computational load across multiple processors or machines, enabling parallel processing. This can significantly speed up the solution time for complex problems.

Furthermore, decomposition-based algorithms can facilitate the reuse of problem-solving methods and techniques. Once a problem is decomposed, the algorithm can apply the same solution method to each subproblem, taking advantage of any previously developed problem-solving techniques. This saves time and effort in developing new solution methods for each subproblem.

Applications of Decomposition-Based Algorithms

Decomposition-based algorithms have been successfully applied in various domains of artificial intelligence. In computer vision, these algorithms have been used to solve complex image recognition tasks by decomposing the problem into subtasks such as feature extraction and classification.

In natural language processing, decomposition-based algorithms have been employed to tackle complex language understanding problems by decomposing the problem into subproblems such as semantic analysis and syntax parsing.

Furthermore, in robotics, decomposition-based algorithms have been utilized to solve complex motion planning problems by decomposing the problem into subproblems such as path planning and obstacle avoidance.

In conclusion, decomposition-based algorithms are valuable tools for solving complex problems in artificial intelligence. By breaking down large and complex problems into smaller subproblems, these algorithms can simplify the problem-solving process, improve scalability, and facilitate the reuse of problem-solving techniques. These algorithms have found successful applications in various domains, including computer vision, natural language processing, and robotics.

Agent-based Problem Decomposition in Multi-Agent Systems

Intelligence and problem-solving are intertwined concepts in the realm of artificial intelligence. One key challenge in developing intelligent systems is decomposing complex problems into smaller, more manageable subproblems. This allows for more efficient and effective problem-solving strategies to be implemented.

In the context of multi-agent systems, where multiple autonomous agents collaborate to solve a problem, problem decomposition becomes even more crucial. Agent-based problem decomposition involves dividing a complex problem among agents, each responsible for solving a specific subproblem. This approach harnesses the collective intelligence and problem-solving capabilities of the agents to tackle the problem as a whole.

One common technique for agent-based problem decomposition is task allocation, where agents are assigned specific tasks based on their individual strengths and capabilities. This ensures that each agent contributes optimally to the problem-solving process. Moreover, agents can communicate and share relevant information with each other, allowing for better coordination and collaboration.
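A minimal sketch of capability-based task allocation (the agent names and capability scores are invented for illustration): each task is assigned to the agent with the highest capability score for that task.

```python
# Hypothetical agents and their capability scores per task.
agents = {
    "vision_agent":  {"detect_objects": 0.9, "plan_route": 0.2},
    "planner_agent": {"detect_objects": 0.1, "plan_route": 0.8},
}


def allocate(tasks, agents):
    assignment = {}
    for task in tasks:
        # pick the agent with the best capability score for this task
        best = max(agents, key=lambda name: agents[name].get(task, 0.0))
        assignment[task] = best
    return assignment


print(allocate(["detect_objects", "plan_route"], agents))
# {'detect_objects': 'vision_agent', 'plan_route': 'planner_agent'}
```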

Another approach to agent-based problem decomposition is goal decomposition, which involves breaking down the overall problem into subgoals that can be assigned to different agents. Each agent then works towards achieving its assigned subgoal, contributing to the overall problem-solving process. This approach enables agents to focus on specific aspects of the problem, leveraging their specialized knowledge and skills.

Agent-based problem decomposition in multi-agent systems offers several advantages. Firstly, it allows for parallel processing, as multiple agents can work simultaneously on different subproblems. This leads to faster and more efficient problem-solving. Additionally, it increases system robustness and fault tolerance, as agents can continue working even if some agents fail or become unavailable.

In conclusion, agent-based problem decomposition is a powerful technique in the field of artificial intelligence. It enables effective problem-solving in multi-agent systems by dividing complex problems into manageable subproblems, harnessing the intelligence and capabilities of multiple agents. This approach promotes efficient coordination, parallel processing, and fault tolerance, leading to improved problem-solving outcomes.

Problem Decomposition Techniques for Constraint Optimization Problems

Problem decomposition is a fundamental technique in artificial intelligence that involves breaking down a complex problem into smaller, more manageable sub-problems. This approach is particularly useful for constraint optimization problems, where the goal is to find the best solution given a set of constraints.

There are several problem decomposition techniques that can be applied to constraint optimization problems. One commonly used technique is divide-and-conquer, where the problem is split into smaller sub-problems that are solved independently. The solutions to the sub-problems are then combined to obtain the overall solution. This approach is useful when the problem can be easily divided into independent parts.

Another problem decomposition technique is constraint generation, where constraints are added incrementally to the problem until a satisfactory solution is found. This approach is particularly useful when the problem constraints are not known in advance or are difficult to formulate. Constraints can be generated based on the current solution or dynamically updated as the search progresses.
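The following toy Python sketch illustrates the idea (the search space, objective, and constraint pool are all made up): the solver starts with no active constraints, picks the best candidate, and whenever that candidate violates a constraint, the constraint is generated, added, and the problem is re-solved.

```python
candidates = [(x, y) for x in range(6) for y in range(6)]   # small search space
objective = lambda s: s[0] + s[1]                           # maximize x + y

# Constraints are only generated (activated) when the current solution violates them.
constraint_pool = [
    lambda s: s[0] + 2 * s[1] <= 6,
    lambda s: s[0] <= 4,
]

active = []
while True:
    best = max((s for s in candidates if all(c(s) for c in active)), key=objective)
    violated = [c for c in constraint_pool if c not in active and not c(best)]
    if not violated:
        break                      # satisfactory solution found
    active.append(violated[0])     # generate one more constraint and re-solve

print(best)                        # (4, 1) once both constraints are active
```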

Advantages of Problem Decomposition Techniques

  • Improved scalability: By decomposing a problem into smaller sub-problems, it becomes easier to handle larger problem instances.
  • Efficient search: Problem decomposition allows for parallel and distributed search algorithms, which can greatly speed up the search process.
  • Modularity: Decomposing a problem into sub-problems enhances modularity, making it easier to debug and maintain the solution.

Applications of Problem Decomposition Techniques

  • Resource allocation: Problem decomposition can be used to allocate limited resources among multiple entities, such as scheduling tasks on multiple processors or assigning vehicles to delivery routes.
  • Optimal control: Decomposing a control problem into smaller sub-problems can help in finding optimal control strategies for complex systems.
  • Combinatorial optimization: Problem decomposition techniques are widely used in solving combinatorial optimization problems, such as the traveling salesman problem or the job shop scheduling problem.

In conclusion, problem decomposition techniques are valuable tools in solving constraint optimization problems. They allow for a more efficient and scalable search process, as well as enhanced modularity and flexibility. The choice of problem decomposition technique depends on the specific characteristics of the problem at hand and the desired trade-offs between solution quality and computational complexity.

Case Studies of Problem Decomposition in Artificial Intelligence

In the field of artificial intelligence, problem decomposition techniques have proven to be effective in solving complex problems. By breaking down a problem into smaller sub-problems, intelligent systems are better able to reason and find solutions efficiently.

One interesting case study is the application of problem decomposition in natural language processing. Understanding and generating human language is a challenging task for AI systems. By decomposing language processing into sub-tasks, such as syntactic parsing, semantic analysis, and language generation, intelligent systems can produce more accurate and coherent language outputs.

Another case study is in computer vision, particularly object recognition. Identifying objects in images requires analyzing various image features, such as edges, colors, and textures. By decomposing the object recognition problem into sub-tasks, such as feature extraction, feature matching, and classification, AI systems can achieve high accuracy and efficiency in object recognition.

Problem decomposition has also been applied to robotics. Autonomous robots often face complex tasks, such as navigating through an environment and manipulating objects. By breaking down these tasks into sub-tasks, such as perception, planning, and control, robots can perform tasks more effectively and autonomously.

In the field of machine learning, problem decomposition is widely used in ensemble methods. Instead of addressing a problem as a whole, ensemble methods decompose it into multiple sub-problems and make predictions based on the combination of these sub-solutions. This approach often leads to improved prediction accuracy and robustness.

Overall, these case studies demonstrate the effectiveness of problem decomposition techniques in artificial intelligence. By breaking down complex problems into simpler sub-problems, intelligent systems can achieve better performance, efficiency, and accuracy in various domains.

Evolutionary Approaches to Problem Decomposition

Problem decomposition is a fundamental concept in artificial intelligence and plays a significant role in solving complex problems. It involves breaking down a problem into smaller, more manageable sub-problems that can be solved independently or in parallel, leading to improved efficiency and effectiveness of problem-solving algorithms.

Evolutionary approaches offer a unique perspective on problem decomposition, drawing inspiration from the principles of evolution and natural selection. These approaches leverage the power of genetic algorithms, evolutionary strategies, and evolutionary programming to guide the decomposition process.

Genetic Algorithms

Genetic algorithms are a specific type of evolutionary approach widely used in problem decomposition. They mimic the principles of natural selection and genetic variation to evolve a population of candidate decompositions over multiple generations.

A genetic algorithm typically starts with an initial population of decompositions, represented as chromosomes. Each chromosome encodes a potential solution to the decomposition problem, and the fitness of each chromosome is evaluated based on objective criteria.

Through a process of selection, crossover, and mutation, the genetic algorithm iteratively refines the population, favoring solutions with higher fitness and gradually converging towards an optimal decomposition.

The genetic algorithm framework treats decomposition itself as a search over a space of candidate decompositions, allowing diverse and innovative decompositions to emerge that may not have been apparent at the outset.
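A minimal, illustrative genetic algorithm is sketched below (the workload numbers, fitness function, and operators are simplified assumptions, not a production implementation): each chromosome assigns tasks to groups, i.e., encodes a candidate decomposition, and fitness rewards balanced group workloads.

```python
import random

WORKLOADS = [4, 8, 1, 7, 3, 6, 2, 5]   # toy task sizes
K = 3                                   # number of groups in the decomposition


def fitness(chrom):
    """Smaller spread between group workloads means a better decomposition."""
    sums = [0.0] * K
    for task, group in enumerate(chrom):
        sums[group] += WORKLOADS[task]
    return -(max(sums) - min(sums))


def evolve(pop_size=30, generations=100, mutation_rate=0.1):
    pop = [[random.randrange(K) for _ in WORKLOADS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(WORKLOADS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:         # mutation
                child[random.randrange(len(child))] = random.randrange(K)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)


print(evolve())   # one balanced assignment of tasks to the K groups
```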

Evolutionary Strategies and Evolutionary Programming

In addition to genetic algorithms, evolutionary approaches such as evolutionary strategies and evolutionary programming can also be applied to problem decomposition.

Evolutionary strategies focus on direct manipulation of the representation of the decompositions, aiming to optimize specific parameters based on a fitness function. They often include techniques like mutation and recombination to explore new decompositions and improve the solution quality.

Evolutionary programming, on the other hand, emphasizes the iterative adaptation of the decomposition algorithms themselves. It treats the decomposition process as an evolving system, where different heuristics and optimization techniques are progressively integrated or discarded based on their performance.

Overall, evolutionary approaches provide powerful tools for problem decomposition in artificial intelligence. They enable the exploration of novel decompositions, optimization of parameters, and iterative refinement of decomposition algorithms, leading to efficient and effective solutions for complex problems.

Neural Network-based Techniques for Problem Decomposition

Artificial intelligence has made significant strides in problem decomposition, a crucial aspect of solving complex problems. Problem decomposition involves breaking down a large problem into smaller, more manageable sub-problems. Neural network-based techniques have emerged as powerful tools for problem decomposition in artificial intelligence.

Neural networks are structured networks of interconnected artificial neurons, inspired by the biological neural networks in the human brain. These networks can learn complex patterns and relationships from data, enabling them to effectively decompose problems.

1. Multi-task Learning

One technique is multi-task learning, where a neural network is trained to perform multiple related tasks simultaneously. By sharing and transferring knowledge between tasks, the network can decompose a complex problem into a set of simpler tasks. This approach allows the network to learn commonalities and dependencies across the tasks, leading to more effective problem decomposition.
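A short sketch, assuming PyTorch is available, shows the idea: a shared trunk learns a common representation while two task-specific heads handle the decomposed sub-tasks. The architecture, dimensions, and losses are illustrative only.

```python
import torch
import torch.nn as nn


class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, 1)   # sub-task A: regression
        self.head_b = nn.Linear(hidden, 3)   # sub-task B: 3-way classification

    def forward(self, x):
        h = self.shared(x)                   # representation shared across sub-tasks
        return self.head_a(h), self.head_b(h)


model = MultiTaskNet()
x = torch.randn(8, 16)
y_a, y_b = model(x)
# The combined loss couples the sub-problems during training.
loss = nn.functional.mse_loss(y_a, torch.randn(8, 1)) + \
       nn.functional.cross_entropy(y_b, torch.randint(0, 3, (8,)))
loss.backward()
```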

2. Recurrent Neural Networks

Another technique is the use of recurrent neural networks (RNNs). RNNs have a feedback mechanism that allows them to retain information from previous computations. This property makes them well-suited for decomposing sequential problems, where the order of processing is important. By decomposing a sequential problem into a series of sub-problems, an RNN can effectively tackle complex tasks.

Neural network-based techniques for problem decomposition in artificial intelligence offer several advantages. They are capable of learning complex patterns and relationships, enabling them to decompose problems into simpler sub-problems. Additionally, these techniques can handle various types of problems, including multi-task and sequential problems. Overall, neural network-based techniques provide a powerful approach to problem decomposition in artificial intelligence.

Game Theory and Problem Decomposition

Game theory is a branch of artificial intelligence that focuses on decision-making in competitive situations. It is concerned with analyzing the behavior of rational individuals or agents in strategic interactions, where the outcome of their decisions depends not only on their own actions but also on the actions of others.

Problem decomposition, on the other hand, is a technique used in artificial intelligence to break down a complex problem into smaller, more manageable subproblems. This allows for more efficient problem-solving and can help identify optimal solutions.

When applied to game theory, problem decomposition can be used to analyze and solve complex strategic interactions. By breaking down the game into smaller subgames, researchers can focus on analyzing the behavior and decisions of individual agents or groups of agents. This can lead to a better understanding of the underlying dynamics of the game and help identify strategies that can lead to desirable outcomes.

One of the key challenges in game theory and problem decomposition is finding the right level of decomposition. Breaking down the game into too many subgames can result in an explosion in complexity and make the problem difficult to solve. On the other hand, breaking down the game into too few subgames can oversimplify the problem and miss important strategic interactions.

In conclusion, game theory and problem decomposition are two important concepts in artificial intelligence. By applying problem decomposition techniques to game theory, researchers can gain insights into strategic interactions and develop strategies that can lead to desirable outcomes. However, finding the right level of decomposition is crucial and requires careful analysis and consideration.

Swarm Intelligence Techniques for Problem Decomposition

In the field of artificial intelligence, the problem of decomposing complex tasks into smaller, more manageable subproblems is a fundamental challenge. Swarm intelligence techniques have emerged as a promising approach for addressing this problem.

Swarm intelligence is inspired by the collective behavior of social insect colonies, such as ants or bees. It involves the coordination and cooperation of a large number of simple agents to solve complex problems. Each individual, often referred to as a “particle” or “agent”, has limited capabilities and only partial knowledge of the problem at hand.

Particle Swarm Optimization (PSO)

One popular swarm intelligence technique for problem decomposition is Particle Swarm Optimization (PSO). PSO is a metaheuristic optimization algorithm that mimics the social behavior of birds flocking or fish schooling. In PSO, a swarm of particles explores the problem's search space, continually updating their positions based on their own best-known solution and the best-known solution of their neighbors.

The main advantage of PSO for problem decomposition is its ability to efficiently explore large and complex search spaces. By decomposing the problem into subproblems, each particle can focus on solving a specific subproblem, and the collective behavior of the swarm enables the identification of high-quality solutions.
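Below is a minimal PSO sketch in plain Python (the objective, swarm size, and coefficients are illustrative defaults): each particle is pulled toward its own best-known position and the swarm's best-known position.

```python
import random


def objective(x):
    return sum(v * v for v in x)            # toy sub-problem: minimize ||x||^2


def pso(dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]             # each particle's best-known position
    gbest = min(pbest, key=objective)       # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest


print(pso())   # should be close to [0.0, 0.0]
```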

Ant Colony Optimization (ACO)

Another swarm intelligence technique that has shown promise for problem decomposition is Ant Colony Optimization (ACO). ACO is inspired by the foraging behavior of ants, where ants deposit pheromones to communicate information about food sources. In ACO, artificial ants explore the problem search space, depositing virtual pheromones to mark good solutions.

ACO has been successfully applied to a wide range of optimization problems, such as the traveling salesman problem and vehicle routing problem. By decomposing the problem into subproblems and allowing the artificial ants to explore different paths, ACO can efficiently find high-quality solutions.

In conclusion, swarm intelligence techniques, such as Particle Swarm Optimization and Ant Colony Optimization, offer effective approaches for problem decomposition in artificial intelligence. These techniques leverage the collective behavior of simple agents to efficiently explore complex search spaces and find high-quality solutions. By decomposing problems into subproblems and utilizing the coordinated efforts of the swarm, swarm intelligence techniques enable the efficient tackling of complex tasks.

Planning and Scheduling Techniques using Problem Decomposition

Problem decomposition is a critical aspect of artificial intelligence that enables efficient planning and scheduling techniques. By breaking down complex problems into smaller, more manageable sub-problems, intelligent systems can tackle larger tasks with greater ease and efficiency.

Benefits of Problem Decomposition

Problem decomposition allows artificial intelligence systems to identify and focus on specific aspects of a problem, leading to more effective planning and scheduling. By dividing a problem into smaller components, the system can allocate resources and make decisions more efficiently, resulting in improved performance and accuracy.

Additionally, problem decomposition enables parallelism in the planning and scheduling process. Different components of a problem can be processed simultaneously, leading to faster execution times and increased overall efficiency.

Techniques for Problem Decomposition

There are several techniques available for problem decomposition in artificial intelligence. One commonly used approach is hierarchical decomposition, where a problem is divided into smaller sub-problems that can be solved independently. These sub-problems are then combined to form a solution for the original problem.

Another technique is functional decomposition, where the problem is broken down based on the different functions or tasks it involves. This allows for a more modular approach, where each function can be solved independently and then integrated to form a complete solution.

Other techniques include task decomposition, where the problem is divided into smaller tasks that can be executed independently, and constraint decomposition, where the problem is partitioned based on constraints or limitations that need to be met.

Applications of Planning and Scheduling Techniques

Planning and scheduling techniques using problem decomposition have wide-ranging applications in artificial intelligence. They can be used in autonomous systems, such as self-driving cars, to efficiently plan routes and schedule tasks. In robotics, problem decomposition can help divide complex tasks into smaller, more manageable actions that can be executed by the robot.

These techniques are also utilized in production planning to optimize resource allocation and scheduling. In healthcare, problem decomposition can assist in patient scheduling and resource management. Overall, planning and scheduling techniques using problem decomposition play a crucial role in improving the efficiency and effectiveness of intelligent systems in various domains.

Knowledge Representation and Problem Decomposition

Knowledge representation and problem decomposition are integral components of artificial intelligence systems. Effective knowledge representation allows for the organization, storage, and retrieval of information, while problem decomposition breaks down complex problems into smaller, more manageable subproblems.

Knowledge Representation

Knowledge representation techniques are crucial in artificial intelligence as they facilitate the storage and manipulation of information. Various methods, including logic-based representations, semantic networks, and ontologies, are used to encode data and knowledge in a format that can be processed by AI algorithms.

Logic-based representations, such as first-order logic and propositional logic, enable the representation of facts, rules, and relationships between entities. Semantic networks, on the other hand, represent knowledge through nodes and links, where nodes represent concepts or objects, and links represent relationships between them.

Ontologies, a formal representation of knowledge domains, provide a structured way to describe concepts, properties, and relations in a specific domain. They enable inference and reasoning capabilities, facilitating the development of intelligent systems.

Problem Decomposition

Problem decomposition is the process of breaking down complex problems into smaller, more manageable subproblems. This technique enables AI systems to tackle large-scale problems by dividing them into manageable parts, offering several advantages.

Firstly, problem decomposition allows for parallel processing, where different subproblems can be solved concurrently, improving overall efficiency. Additionally, it enables specialization, where experts can focus on specific subproblems, leveraging their expertise.

Problem decomposition also promotes modularity, as each subproblem can be solved independently and integrated into the overall solution. It provides a hierarchical structure, with high-level problems broken down into increasingly smaller subproblems, allowing for a more granular understanding and analysis of the problem.

Furthermore, problem decomposition facilitates code reuse, as algorithms developed to solve subproblems can be utilized in other contexts. This leads to the development of reusable components and libraries, saving time and resources in the long run.

In conclusion, knowledge representation and problem decomposition play vital roles in artificial intelligence systems. Effective representation allows for the organization and manipulation of information, while problem decomposition enables the efficient handling of complex problems. Together, they contribute to the advancement and development of intelligent systems.

Problem Decomposition in Natural Language Processing

Problem decomposition is a fundamental technique in artificial intelligence that has found extensive applications in various domains, including natural language processing (NLP). NLP involves the interaction between computers and human language, enabling machines to understand, interpret, and generate human language.

In NLP, problem decomposition refers to the process of breaking down complex language tasks into smaller, more manageable subtasks. By decomposing a problem, NLP algorithms can tackle specific linguistic challenges and enhance the overall efficiency and accuracy of natural language processing systems.

One common approach to problem decomposition in NLP is the use of rule-based methods. These methods involve defining grammatical rules and linguistic patterns to handle specific subtasks, such as sentence segmentation, part-of-speech tagging, named entity recognition, and syntactic parsing. By decomposing the overall NLP problem into these subtasks, rule-based methods can address specific language processing challenges effectively.
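The toy Python sketch below illustrates such a rule-based decomposition (the rules are deliberately crude and purely illustrative): the overall task is split into a sentence-segmentation step and a simplistic part-of-speech tagging step.

```python
import re


def segment_sentences(text):
    """Sub-task 1: split text into sentences on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def tag_tokens(sentence):
    """Sub-task 2: crude rule-based part-of-speech tagging (illustrative only)."""
    tags = []
    for token in sentence.rstrip(".!?").split():
        if token.lower() in {"the", "a", "an"}:
            tags.append((token, "DET"))
        elif token.endswith("ing") or token.endswith("ed"):
            tags.append((token, "VERB"))
        else:
            tags.append((token, "NOUN"))
    return tags


text = "The robot parked. A drone is flying."
for sentence in segment_sentences(text):
    print(tag_tokens(sentence))
```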

Another approach to problem decomposition in NLP is through the use of machine learning techniques. Machine learning approaches involve training models on large datasets and using them to make predictions or classify linguistic elements in a given text. These models can be trained to perform various NLP subtasks, such as sentiment analysis, text classification, and machine translation. By decomposing the NLP problem into these subtasks, machine learning techniques can enhance the accuracy and performance of natural language processing systems.

Problem decomposition in NLP is crucial for addressing the complexity of natural language and improving the efficiency and accuracy of language processing systems. By breaking down complex language tasks into smaller subtasks, NLP algorithms can focus on individual language elements and handle them more effectively. This approach allows for more in-depth analysis and understanding of human language, enabling machines to perform sophisticated language processing tasks.

In conclusion, problem decomposition is a vital technique in natural language processing, allowing for the effective handling of complex language tasks. By decomposing the problem into smaller subtasks, NLP algorithms can enhance the accuracy, efficiency, and overall performance of language processing systems, further advancing the field of artificial intelligence.

Problem Decomposition Techniques in Machine Learning

Problem decomposition techniques in machine learning play a vital role in the field of artificial intelligence. These techniques help break down complex problems into smaller, more manageable sub-problems, allowing machine learning algorithms to process and solve them more efficiently.

There are several methods of problem decomposition that are widely used in machine learning. One common technique is called feature decomposition. In this approach, the problem is decomposed based on the features or characteristics of the data. Each feature is treated as an individual problem, and machine learning algorithms are applied to solve each sub-problem independently. The results of these sub-problems are then combined to obtain the final solution.
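A hedged sketch, assuming scikit-learn and NumPy are available, shows feature decomposition on synthetic data: the feature space is split into two subsets, one classifier is trained per subset, and the predicted probabilities are averaged to form the combined solution. The split and model choices are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
feature_groups = [list(range(0, 5)), list(range(5, 10))]   # feature decomposition

models = []
for cols in feature_groups:
    clf = LogisticRegression(max_iter=1000).fit(X[:, cols], y)   # solve one sub-problem
    models.append((cols, clf))

# Combine the sub-solutions by averaging predicted probabilities.
proba = np.mean([clf.predict_proba(X[:, cols]) for cols, clf in models], axis=0)
print("combined accuracy:", (proba.argmax(axis=1) == y).mean())
```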

Another popular technique is called instance decomposition. In this method, the problem is decomposed based on individual instances or examples in the dataset. Each instance is considered as a separate sub-problem, and machine learning algorithms are used to solve each instance independently. The solutions obtained for each instance are then combined to obtain the overall solution for the entire problem.

Problem decomposition techniques in machine learning also include hierarchical decomposition. In this approach, the problem is decomposed into a hierarchy of sub-problems, where each sub-problem is solved in a step-by-step manner. This hierarchical structure allows for the efficient processing and solving of complex problems, as each sub-problem can be addressed at a more granular level.

In conclusion, problem decomposition techniques in machine learning are crucial for effectively solving complex problems in the field of artificial intelligence. These techniques enable the efficient processing and solving of complex problems by breaking them down into smaller, more manageable sub-problems. Feature decomposition, instance decomposition, and hierarchical decomposition are just a few of the commonly used problem decomposition techniques in machine learning.

Genetic Algorithms and Problem Decomposition

In the field of artificial intelligence, problem decomposition is a crucial technique for solving complex problems. By breaking down a problem into smaller, more manageable subproblems, it becomes easier to develop and implement solutions. One approach to problem decomposition is the use of genetic algorithms.

Genetic algorithms are a type of optimization algorithm inspired by the concept of natural selection. They involve the use of a population of candidate solutions, which are represented as strings of bits or other data structures. The genetic algorithm iteratively selects the best candidate solutions and combines them through reproduction, mutation, and other genetic operators to generate new candidate solutions.

Benefits of Genetic Algorithms in Problem Decomposition

Genetic algorithms offer several benefits when it comes to problem decomposition. First, they can handle problems with a large number of variables and constraints, which can be challenging to solve using traditional techniques.

Second, genetic algorithms can exploit the search space efficiently by exploring multiple possible solutions simultaneously. This allows for the identification of promising subproblems, which can be further decomposed and solved independently. By doing so, the overall problem can be tackled in parallel, leading to faster and more efficient solution generation.

Application of Genetic Algorithms in Problem Decomposition

Genetic algorithms have been successfully applied to various problem decomposition tasks in artificial intelligence. For example, they have been used in the decomposition of complex optimization problems, such as combinatorial optimization and scheduling problems.

These examples demonstrate the effectiveness of genetic algorithms in decomposing complex problems and finding high-quality solutions. By leveraging the power of evolution and natural selection, genetic algorithms enable problem decomposition in artificial intelligence to be more efficient and effective.

Problem Decomposition for Resource Allocation Problems

In artificial intelligence, problem decomposition is a technique that is often used to solve complex problems by breaking them down into smaller, more manageable subproblems. This approach can be particularly useful when dealing with resource allocation problems, which involve determining how limited resources should be distributed to different tasks or agents.

Problem decomposition for resource allocation problems typically involves dividing the main problem into smaller subproblems that can be solved independently or in parallel. Each subproblem focuses on a specific aspect of the resource allocation, such as determining the optimal allocation for a single task or agent.

By decomposing the problem, artificial intelligence systems can effectively analyze and optimize resource allocation at a granular level. This allows for more efficient allocation strategies and better utilization of resources. Additionally, decomposing the problem can help in identifying and addressing bottlenecks or inefficiencies in the allocation process.

Several techniques can be used for problem decomposition in resource allocation problems. One common approach is to divide the problem based on the characteristics of the resources or tasks involved. For example, one subproblem may focus on allocating resources to high-priority tasks, while another subproblem may handle the allocation of resources to low-priority tasks.

Another approach is to divide the problem based on the constraints or objectives of the resource allocation. For instance, one subproblem may aim to minimize the total cost of resource allocation, while another subproblem may prioritize fairness or equity in the allocation process.
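A toy sketch of the priority-based decomposition described above follows (task names, demands, and the greedy rule are illustrative): high-priority tasks are allocated first as one sub-problem, and the remaining capacity is then distributed among low-priority tasks.

```python
tasks = [
    {"name": "t1", "priority": "high", "demand": 4},
    {"name": "t2", "priority": "low",  "demand": 3},
    {"name": "t3", "priority": "high", "demand": 2},
    {"name": "t4", "priority": "low",  "demand": 5},
]


def allocate_by_priority(tasks, capacity):
    allocation, remaining = {}, capacity
    for level in ("high", "low"):                      # one sub-problem per priority class
        for task in (t for t in tasks if t["priority"] == level):
            granted = min(task["demand"], remaining)   # greedy within the sub-problem
            allocation[task["name"]] = granted
            remaining -= granted
    return allocation


print(allocate_by_priority(tasks, capacity=10))
# {'t1': 4, 't3': 2, 't2': 3, 't4': 1}
```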

Overall, problem decomposition is a powerful technique for addressing resource allocation problems in artificial intelligence. By breaking down complex problems into smaller, more manageable subproblems, AI systems can effectively optimize resource allocation and improve overall system performance.

Application of Problem Decomposition in Robotics

Robotics is a field that heavily relies on problem decomposition techniques in order to tackle complex tasks. Problem decomposition involves breaking down a large problem into smaller, more manageable sub-problems. This approach allows robots to efficiently solve complex problems by focusing on smaller components at a time.

One area where problem decomposition is widely used in robotics is in motion planning. Motion planning is the process of determining a sequence of movements for a robot to navigate from one location to another while avoiding obstacles. This can be a challenging problem, especially in dynamic and uncertain environments.

By decomposing the motion planning problem, robots can divide the task into smaller sub-problems, such as obstacle detection, path planning, and collision avoidance. Each sub-problem can then be solved independently, and the solutions can be combined to achieve the overall goal of reaching the desired destination while avoiding obstacles.

Another application of problem decomposition in robotics is in robot perception. Perception refers to the ability of a robot to interpret and understand its environment through sensors and data processing. This is crucial for robots to interact with their surroundings and perform tasks effectively.

Problem decomposition can be applied to robot perception by breaking down the complex task of understanding the environment into smaller sub-problems, such as object recognition, localization, and mapping. By solving these sub-problems independently, robots can build a comprehensive understanding of their surroundings and make informed decisions based on the data they collect.

Furthermore, problem decomposition can also be utilized in robot control. Robot control involves determining the actions and movements that a robot should take to achieve a desired outcome. This can range from simple tasks like picking up an object to more complex tasks like autonomous navigation.

By decomposing the robot control problem, robots can break it down into smaller sub-problems, such as trajectory planning, feedback control, and motor coordination. Each sub-problem can then be addressed separately, allowing the robot to perform precise and coordinated actions to accomplish its goals.

In conclusion, problem decomposition techniques have found widespread application in robotics, enabling robots to tackle complex tasks efficiently and effectively. Whether it’s motion planning, perception, or control, problem decomposition allows robots to break down complex problems into smaller, manageable components and address them individually. This approach increases the overall intelligence and capabilities of robots, making them more versatile and adaptable to various environments and tasks.

Problem Decomposition Techniques for Time Series Analysis

Time series analysis is a crucial task in various fields, such as finance, economics, and weather forecasting. However, analyzing time series data can be challenging due to the inherent complexities and dynamics of temporal relationships. To address these challenges, problem decomposition techniques in artificial intelligence can provide effective solutions.

Problem decomposition involves breaking down a complex problem into smaller, more manageable sub-problems. This approach allows us to tackle each sub-problem independently, which can significantly simplify the analysis of time series data. Here, we discuss some important problem decomposition techniques for time series analysis:

  • Trend decomposition: One common technique is to decompose a time series into its trend, seasonality, and residual components. This decomposition helps in understanding the long-term trend and seasonal patterns in the data, enabling better forecasting and anomaly detection.
  • Frequency-based decomposition: Another approach is to decompose a time series based on its frequency components using techniques like Fourier analysis or wavelet analysis. This decomposition helps in identifying dominant frequency components and their contributions to the overall time series, providing insights into cyclic patterns.
  • Segmentation-based decomposition: This technique involves dividing a time series into meaningful segments based on changes in properties or patterns. Segmentation-based decomposition can help in identifying different regimes or states in the data, allowing separate analysis and modeling for each segment.

These problem decomposition techniques can be further combined or adapted to suit specific time series analysis tasks and domain-specific requirements. By decomposing a time series problem into smaller components, we can gain a deeper understanding of the underlying dynamics and relationships, leading to more accurate forecasting, anomaly detection, and decision-making in various application areas.
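As a minimal sketch of the trend decomposition described above (assuming only NumPy, with a deliberately rough moving-average trend estimate), the following snippet splits a synthetic series into trend, seasonal, and residual components.

```python
import numpy as np


def decompose(series, period):
    series = np.asarray(series, dtype=float)
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")          # rough trend estimate
    detrended = series - trend
    # average the detrended values at each position within the period
    seasonal_profile = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.resize(seasonal_profile, series.size)       # repeat over the series
    residual = series - trend - seasonal
    return trend, seasonal, residual


t = np.arange(48)
series = 0.5 * t + 3 * np.sin(2 * np.pi * t / 12) + np.random.normal(0, 0.3, t.size)
trend, seasonal, residual = decompose(series, period=12)
print(trend[:5], seasonal[:5], residual[:5])
```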

Hybrid Approaches to Problem Decomposition

Problem decomposition is a fundamental concept in artificial intelligence, where a complex problem is broken down into smaller, more manageable sub-problems to facilitate problem solving. While various techniques exist for problem decomposition, hybrid approaches that combine multiple methods have proven to be highly effective in addressing complex problems.

Hybrid approaches to problem decomposition leverage the strengths of different decomposition techniques to enhance problem solving. By combining complementary methods, these approaches can provide a more comprehensive and efficient solution to complex problems.

One common hybrid approach is to combine top-down and bottom-up decomposition techniques. Top-down decomposition involves starting with a high-level problem and breaking it down into smaller components, while bottom-up decomposition starts with low-level components and builds up to tackle the overall problem. By combining these two approaches, the hybrid method can capitalize on the advantages of both strategies, enabling a more holistic view of the problem.

Another hybrid approach is to integrate domain-specific knowledge into the decomposition process. By incorporating expert knowledge into the problem decomposition, the hybrid method can optimize the decomposition strategy based on specific characteristics of the problem domain. This integration of intelligence can lead to more efficient decomposition and solution strategies.

Furthermore, hybrid approaches may also involve combining different problem-solving algorithms or heuristics. By selecting and combining the most effective algorithms or heuristics for each sub-problem, the hybrid method can improve the problem-solving efficiency and accuracy. This approach allows for an adaptive and context-aware problem decomposition and solution process.

In conclusion, hybrid approaches to problem decomposition in artificial intelligence leverage the strengths of different techniques, integration of domain-specific knowledge, and combination of problem-solving algorithms to tackle complex problems effectively. These approaches provide a more comprehensive and efficient solution and enhance the intelligence behind the problem decomposition process.

Decision Making and Problem Decomposition

Intelligence in artificial systems heavily relies on the ability to make decisions and solve complex problems. Problem decomposition is an essential technique that helps break down large and challenging problems into smaller, more manageable subproblems.

By decomposing a problem, an artificial intelligence system can better understand the problem space and identify relevant features and dependencies. This process allows the system to efficiently allocate resources and apply appropriate techniques to each subproblem, ultimately leading to more effective problem-solving.

Breaking Down Complexity

Complex problems can often overwhelm an artificial intelligence system, resulting in inefficient resource allocation and suboptimal decision making. Problem decomposition addresses this challenge by dividing the problem into smaller, more manageable components.

Each subproblem can then be analyzed and solved independently, taking into account the relevant context and constraints. This approach enables the system to effectively navigate through the problem space and uncover potential solutions that may have been overlooked in a holistic problem-solving approach.

The Role of Decision Making

Decision making plays a crucial role in problem decomposition by guiding the system in determining how to break down the problem and allocate resources. Strategic decisions need to be made to identify which subproblems are the most critical and require immediate attention.

Moreover, decision making is also essential when selecting appropriate techniques and algorithms to solve each subproblem. Different subproblems may require different computational approaches, and intelligent decision making ensures that suitable methods are applied.

Effective decision making in problem decomposition involves a combination of heuristics, domain knowledge, and learning algorithms. By leveraging these tools, an artificial intelligence system can adapt and improve its decision-making capabilities over time.

In conclusion, decision making and problem decomposition are crucial aspects of artificial intelligence. They enable intelligent systems to effectively handle complex problems by breaking them down into smaller, more manageable subproblems and making informed decisions about resource allocation and problem-solving techniques.

Knowledge Discovery Techniques using Problem Decomposition

Problem decomposition is a crucial aspect of artificial intelligence, as it enables the effective organization and utilization of knowledge. By breaking down complex problems into smaller, more manageable subproblems, researchers and practitioners are able to apply specific techniques for knowledge discovery.

The Benefits of Problem Decomposition

Problem decomposition allows for a structured approach to knowledge discovery, enabling researchers to gain insights into specific components of a problem. By breaking down a complex problem into smaller subproblems, researchers can focus on understanding and solving each subproblem individually. This approach not only simplifies the problem-solving process but also enhances the probability of finding meaningful patterns and relationships within the data.

This process is particularly useful in artificial intelligence, where large and complex datasets are common. By decomposing a problem, researchers can apply various techniques such as data mining, machine learning, and statistical analysis to uncover hidden patterns, relationships, and trends within the data.

Techniques for Knowledge Discovery

Several techniques can be employed for knowledge discovery using problem decomposition in artificial intelligence:

  • Data Mining: Data mining techniques involve extracting useful information from large datasets. By decomposing a problem into smaller subproblems, researchers can apply data mining techniques to each subproblem individually, uncovering patterns and relationships within the data.
  • Machine Learning: Machine learning algorithms can be applied to each subproblem to discover patterns and relationships within the data. By decomposing a problem, researchers can train machine learning models on smaller subsets of the data, leading to more accurate and efficient learning.
  • Statistical Analysis: Statistical analysis techniques can be used to analyze each subproblem individually, revealing patterns, trends, and relationships within the data. By decomposing a problem, researchers can apply statistical analysis techniques to smaller datasets, providing more accurate and interpretable results.

Overall, problem decomposition plays a vital role in knowledge discovery in artificial intelligence. By breaking down complex problems into smaller, more manageable subproblems, researchers can apply various techniques to uncover hidden patterns and relationships within the data. This structured approach enhances the efficiency and accuracy of knowledge discovery processes, leading to meaningful insights and advancements in the field of artificial intelligence.

Questions and answers

What is problem decomposition in artificial intelligence?

Problem decomposition in artificial intelligence is a technique that involves breaking down complex problems into smaller, more manageable subproblems. This allows AI systems to solve problems more efficiently by tackling each subproblem individually and then combining the solutions to obtain the overall solution.

What are some techniques used for problem decomposition in AI?

There are several techniques used for problem decomposition in AI, including divide and conquer, hierarchical decomposition, functional decomposition, and modular decomposition. Each technique has its own advantages and is suitable for different types of problems.

Can you provide an example of problem decomposition in AI?

Sure! Let’s consider the problem of image classification. Instead of trying to build a single AI model that can classify all types of images, problem decomposition can be employed. The problem can be decomposed into multiple subproblems, where each subproblem focuses on classifying a specific type of image (e.g., cats, dogs, cars). This allows for the development of specialized models for each subproblem, leading to better overall classification accuracy.
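
A minimal sketch of this idea, assuming scikit-learn is available and using random feature vectors in place of real images: one small binary classifier is trained per category (cats, dogs, cars), and the final prediction is the category whose specialist is most confident. The data, feature dimensions, and model choice here are illustrative assumptions, not a prescribed pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
categories = ["cat", "dog", "car"]

# Stand-in for image feature vectors: 60 samples, 8 features, one true label each.
X = rng.normal(size=(60, 8))
y = rng.choice(categories, size=60)

# Subproblem per category: train a specialised "is it X or not?" classifier.
specialists = {}
for category in categories:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, (y == category).astype(int))
    specialists[category] = clf

def classify(sample: np.ndarray) -> str:
    """Combine the subproblem solutions: pick the most confident specialist."""
    scores = {c: clf.predict_proba(sample.reshape(1, -1))[0, 1]
              for c, clf in specialists.items()}
    return max(scores, key=scores.get)

print(classify(rng.normal(size=8)))
```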

What are the benefits of using problem decomposition in AI?

Problem decomposition in AI offers several benefits. It allows for parallelization, as different subproblems can be solved concurrently. It also promotes modular design and reusability of solutions, as the solutions to subproblems can be reused in different contexts. Additionally, problem decomposition enables more efficient problem solving by breaking down complex problems into simpler ones, reducing the computational complexity.
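
The parallelization benefit can be sketched as follows, assuming the subproblems are independent. This hypothetical example uses Python's standard-library ProcessPoolExecutor to solve several sub-ranges of a larger summation concurrently and then combine the partial results.

```python
from concurrent.futures import ProcessPoolExecutor

def solve_subproblem(bounds: tuple[int, int]) -> int:
    """A stand-in solver: sum the integers in one sub-range of a larger range."""
    start, end = bounds
    return sum(range(start, end))

def main() -> None:
    # Decompose one large problem (sum of 0..999999) into independent sub-ranges.
    subproblems = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]

    # Solve the subproblems concurrently in separate processes.
    with ProcessPoolExecutor() as pool:
        partial_sums = list(pool.map(solve_subproblem, subproblems))

    # Combine the partial solutions into the overall answer.
    print(sum(partial_sums))  # 499999500000

if __name__ == "__main__":
    main()
```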

Are there any applications of problem decomposition in real-world AI systems?

Yes, problem decomposition is widely used in various real-world AI systems. It is employed in natural language processing tasks, such as language translation and sentiment analysis, where the problem is decomposed into subproblems like word alignment and part-of-speech tagging. Problem decomposition is also used in computer vision tasks, such as object recognition and image segmentation, where the problem is decomposed into subproblems like feature extraction and object localization.

Problem decomposition in artificial intelligence refers to the process of breaking down a complex problem into smaller, more manageable sub-problems. This allows for easier solution development and can improve efficiency and performance in AI systems.

What are some techniques for problem decomposition in AI?

There are several techniques for problem decomposition in AI, including divide and conquer, hierarchical decomposition, task allocation, abstraction, and parallel processing. These techniques can be applied individually or in combination depending on the problem at hand.

Why is problem decomposition important in AI?

Problem decomposition is important in AI because it allows for better problem-solving strategies and improved efficiency. By breaking down complex problems into smaller, more manageable components, AI systems can more effectively analyze and solve problems, leading to more accurate and efficient results.

What are some applications of problem decomposition in AI?

Problem decomposition has a wide range of applications in AI, including natural language processing, computer vision, robotics, machine learning, and search algorithms. It can be applied to various domains and industries to improve problem-solving capabilities and overall system performance.


Is Your AI-First Strategy Causing More Problems Than It’s Solving?

  • Oguz A. Acar

Consider a more balanced and thoughtful approach to AI transformation.

The problem with an AI-first strategy lies not with the "AI" but with the notion that it should come "first." An AI-first approach can be myopic, potentially leading us to overlook the true purpose of technology: to serve and enhance human endeavors. Instead, the author recommends following the 3Ps during an AI transformation: problem-centric, people-first, and principle-driven.

From technology giants like Google to major management consultants like McKinsey, a rapidly growing number of companies preach an "AI-first" strategy. In essence, this means treating AI as the ultimate strategic priority, one that precedes all other alternatives. At first glance, this strategy seems logical, perhaps even inevitable. The figures speak for themselves: the sheer volume of investment flowing into AI technologies reflects confidence in an increasingly AI-driven future.

  • Oguz A. Acar is a Chair in Marketing at King’s Business School, King’s College London.

AI Accelerator Institute

Tackling global challenges with AI

Pranav Sharma

In an increasingly interconnected world, humanity faces a slew of complex global concerns that cut across geographical, cultural, and disciplinary lines. These challenges, which include problems like socio-economic injustice, disease, and environmental degradation, display remarkable complexity owing to their diverse character.

The unpredictable interactions of variables frequently result in cascading effects that defy conventional approaches to problem-solving. Therefore, developing successful solutions necessitates a shift from traditional approaches and a thorough awareness of the complex interactions between various facets of these difficulties.

Global issues that AI could help us solve 

Topics covered in this article:

  • Role of technology, specifically artificial intelligence (AI)
  • Climate change and environmental degradation
  • Healthcare access and disease management
  • Poverty and inequality
  • Food security and agriculture
  • Education and skill development
  • Cybersecurity and privacy concerns
  • Interconnectedness and the need for innovative solutions

Technology emerges as a ray of hope in this complex environment of international problems. A revolutionary area of technology called AI has the power to simplify the complexity of these problems.

The development of computer systems with AI enables them to carry out operations that traditionally require human intelligence, such as pattern recognition, decision-making, and experience-based learning.

AI can quickly identify hidden correlations in enormous data sets and extract insightful information that eludes human observation. This ability presents AI as a potent instrument to not only understand the complex dynamics of global concerns but also to develop cutting-edge approaches to effectively address them.

This article begins an exploration of the innovative ways AI is being used to address a variety of urgent global challenges, ranging from revolutionizing healthcare diagnoses and treatment approaches to developing all-encompassing strategies for addressing the multifaceted challenges of climate change.

It's becoming more and more clear that AI's capabilities go beyond simple automation as it develops into a force that changes how problems are solved across a variety of industries. This article illuminates the intriguing potential for creative solutions to address urgent global issues in a more subtle and efficient way through the perspective of AI's applications.

Global challenges and the call for innovative solutions

Humanity is challenged with a complicated web of difficulties that cross borders, disciplines, and ideologies in an increasingly interconnected globe. These global issues, which range from poverty and education to climate change and healthcare access, are interwoven, complex riddles that call for creative and cooperative solutions.

As we examine these issues more closely, it becomes clear that they must be approached in creative ways, with technology, notably AI, emerging as an essential instrument to do so.

Climate change and environmental degradation may be among the most pressing and persistent global issues. Extreme weather, melting ice caps, and rising global temperatures all signal an imminent calamity.

Ecosystem and economic interdependence emphasize the need for comprehensive approaches that address not only carbon emissions but also the complex linkages between human activity and environmental systems.

Another urgent issue is access to high-quality healthcare and efficient disease management. Diseases have little regard for borders, as the COVID-19 pandemic has starkly shown.

In order to contain global health crises, ensuring equitable access to medical care and cutting-edge therapies becomes crucial. The global interconnectedness of health systems emphasizes the significance of interdisciplinary efforts and technological developments in diagnosis, therapy, and vaccine development.

Poverty and inequality continue to be major global issues that threaten social cohesiveness and economic stability. The persistence of inequities is influenced by the intricate interactions of political structures, economic systems, and cultural elements. Innovative answers must take into account not only current problems, but also the underlying causes of poverty cycles and barriers to social mobility.

Agriculture is under tremendous pressure to feed a growing global population while maintaining environmental sustainability. The interconnection of ecosystems, water supplies, and food production necessitates diverse approaches that encourage sustainable farming methods, boost crop resilience, and strengthen distribution networks to reduce food waste.

The continued advancement of society and personal empowerment depends heavily on education. However, worldwide educational inequities continue, restricting prospects for disadvantaged groups.

Innovative methods that use technology to close the digital divide, provide accessible learning resources, and promote skill development for the jobs of the future are required due to the intricate relationship between education, economic development, and social integration.

Cyber threats and privacy have become interconnected worldwide concerns in the digital age. Individuals, companies, and governments are vulnerable to cyberthreats that cross national borders due to the fast spread of the digital realm.

Data networks' interconnection necessitates coordinated efforts to create strong cybersecurity defenses and protect individual privacy in an increasingly interconnected society.

What underscores these global challenges is their intricate interconnectedness. They are not isolated problems but rather nodes in an intricate network where actions in one domain can trigger cascading effects across others.

Climate change, for instance, exacerbates poverty and food insecurity, leading to health crises. In turn, lack of education and healthcare hampers economic development, perpetuating inequality. This intricate web demands solutions that address multiple dimensions simultaneously.

In this situation, technology, especially AI, manifests itself as a transformative force. AI has a distinct advantage when it comes to developing creative solutions to these problems because of its ability to analyze large datasets, find hidden patterns, and simulate complex scenarios.

AI is able to simulate the effects of different interventions and model environmental patterns as they relate to climate change. It can help with early disease identification and personalized treatment in the medical field.

Platforms with AI capabilities can improve access to high-quality educational resources and close educational disparities. AI-driven algorithms can also detect online dangers and maintain data integrity.

As we navigate these challenges, collaboration becomes paramount. Governments, businesses, non-governmental organizations, and academia must join forces to develop integrated solutions that address the multidimensional aspects of these challenges.

Cross-sector partnerships can leverage AI's capabilities to enhance disaster response, optimize resource allocation, and enable data-driven policy decisions.

The world's most critical issues are interrelated systems that necessitate holistic and novel solutions rather than being isolated islands. The complex interconnections between healthcare, poverty, education, cybersecurity, and other issues highlight the need for comprehensive solutions.

A ray of optimism is provided by technology, especially AI, in the face of these difficulties. AI has the potential to bring about disruptive change in a variety of fields by being able to comprehend complex data and produce insights. Humanity may usher in a future where global concerns are solved with creative, integrated, and sustainable solutions by promoting collaboration and utilizing AI's capabilities.

AI: Transforming problem-solving through innovation

AI has emerged as a transformative technology with the potential to revolutionize problem-solving approaches across diverse domains. At its core, AI replicates human cognitive processes, enabling machines to learn, reason, and make decisions.

This ability to mimic human intelligence and its capacity to process vast amounts of data quickly sets AI apart as a game-changing tool in addressing complex challenges.

In the past, solving problems frequently involved manual analysis and rudimentary data processing. This method took a lot of time and was subject to bias. But AI signals a paradigm shift.

AI can comb through enormous databases using machine learning algorithms, picking out complex relationships and patterns that may escape human observation. This data-driven understanding forms the basis for creative solutions.

AI's predictive capabilities are particularly transformative. Through historical data analysis, AI models can forecast future trends and outcomes with remarkable accuracy.

In fields like climate change, AI-driven predictive models offer insights into the behavior of complex systems, enabling informed decisions and interventions to mitigate adverse impacts. Similarly, in healthcare, AI's ability to analyze medical imaging and diagnostic data aids early disease detection, ultimately saving lives.
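
As a toy illustration of forecasting from historical data (real climate or clinical models are far more sophisticated), the sketch below fits a simple linear trend to invented yearly measurements with NumPy and extrapolates it a few years ahead. The numbers are fabricated for demonstration only.

```python
import numpy as np

# Invented historical series: yearly anomaly values for 2000-2023 (illustrative only).
years = np.arange(2000, 2024)
anomaly = 0.02 * (years - 2000) + np.random.default_rng(1).normal(0, 0.05, years.size)

# Fit a simple linear trend to the historical data.
slope, intercept = np.polyfit(years, anomaly, deg=1)

# Extrapolate the trend to "forecast" the next five years.
future_years = np.arange(2024, 2029)
forecast = slope * future_years + intercept
for year, value in zip(future_years, forecast):
    print(f"{year}: predicted anomaly {value:+.3f}")
```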

The ability of AI to adapt is also essential to realizing its transformational potential. Machine learning algorithms continuously enhance their performance over time by learning from fresh data.

AI can address problems with dynamic and changing characteristics, including cybersecurity risks, thanks to its versatility. AI may quickly detect and stop cyber threats in real time by keeping track of network activity and spotting irregularities, strengthening security measures.

AI's capacity to automate tasks also redefines efficiency. Mundane, repetitive tasks can be delegated to AI systems, freeing up human resources for more creative and strategic endeavors.

This is evident in industries like agriculture, where AI-powered drones monitor crop health, optimizing irrigation and pest control. Similarly, in education, personalized learning platforms adapt to individual student needs, enhancing engagement and learning outcomes.

The revolutionary impact of AI also includes tackling problems that are interrelated. Given that many global challenges are interconnected, AI's capacity to analyze complex data is important for developing all-encompassing solutions. For instance, data-driven insights from AI help poverty reduction programs identify vulnerable populations and personalize treatments to their needs.

Ethics are crucial as AI transforms problem-solving, though. To prevent biased results, AI decision-making methods must be transparent and equitable. The "AI skills gap" must also be closed, which calls for educational initiatives that give people the knowledge and abilities they require in this area.

The transformative potential of AI lies in its ability to analyze data, forecast outcomes, adapt, automate, and address interrelated difficulties. AI transforms how we approach problem-solving, enabling us to solve complicated problems with a level of efficiency and precision never before possible.

Although technology has enormous promise, ethical questions, and education must go along with its integration to ensure ethical and fair AI-driven breakthroughs. AI's influence on creating a more inventive and sustainable future is undeniable as it develops.

Predictive models powered by AI are essential for understanding and reducing the effects of climate change. These models use extensive datasets, such as historical climate patterns, to predict future trends with high accuracy.

By anticipating and monitoring natural disasters, AI can help with disaster management by assisting governments and organizations in planning evacuation routes and effectively allocating resources.

Additionally, AI plays a crucial role in optimizing energy use through smart grids, which balance energy supply and demand, cut waste, and encourage the integration of renewable energy sources.

AI-generated image depicting climate changes

Through its capabilities in medical diagnosis and treatment, AI is revolutionizing healthcare. AI-powered medical imaging and pattern recognition algorithms can detect subtle irregularities that may be invisible to the human eye, allowing for the early diagnosis and treatment of diseases.

AI-driven simulations and data analysis speed up the drug discovery process, cutting down on the time and expense needed to introduce new medicines to the market. Additionally, AI enables telemedicine, allowing for remote patient monitoring and diagnosis, which is particularly important in areas with poor access to healthcare services.

AI helping doctors in disease diagnosis 

AI's data-driven insights have enormous potential for developing focused strategies to reduce poverty. Governments and organizations can identify vulnerable populations and more efficiently allocate resources by studying demographic and socioeconomic data.

AI can also contribute to reducing inequality through AI-assisted skill development and job matching. These technologies can help people find jobs that match their abilities and career goals, bridging the gap between employers and job seekers.

Precision farming is one way that AI is transforming agricultural operations. Farmers can optimize irrigation, fertilizer use, and pest management by using sensors and drones with AI technology to monitor crop health and soil conditions.

Early pest identification and disease control solutions powered by AI stop the spread of infections that endanger crops. AI supports sustainable farming practices and increases crop yields while reducing resource waste.

Drones (AI) helping in farming and agriculture

AI is transforming education by enabling individualized learning experiences. AI algorithms are used by adaptive learning systems to examine the learning habits of students and modify the curriculum to meet specific learning requirements.

This strategy caters to a variety of learning styles while improving engagement and retention. Additionally, AI-driven skill evaluation and suggestion systems give information about a person's ability gaps, providing targeted interventions and career advancement chances.

Education made easy by AI tools

Cybersecurity threats and privacy concerns are two new challenges presented by the digital era. By quickly identifying and thwarting threats in real time, AI plays an essential role in addressing these problems.

AI systems can examine network traffic patterns to spot anomalies suggestive of cyberattacks, allowing for quick response. Additionally, data can be used for training AI models without jeopardizing individual privacy thanks to privacy-preserving AI techniques like federated learning.
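
A minimal sketch of the anomaly-detection idea, assuming scikit-learn is available: an IsolationForest is fitted on synthetic "normal" traffic features and then flags points that deviate from them. The feature choices and sample values are assumptions for illustration, not a production intrusion-detection setup.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic traffic features: [packets per second, mean packet size in bytes].
normal_traffic = np.column_stack([
    rng.normal(100, 10, 500),   # typical request rate
    rng.normal(500, 50, 500),   # typical packet size
])

# Fit the detector on traffic assumed to be normal.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new observations: +1 means "looks normal", -1 means "anomalous".
new_traffic = np.array([
    [105, 510],    # ordinary load
    [900, 1400],   # suspicious burst, e.g. a possible flood or exfiltration
])
print(detector.predict(new_traffic))  # expected: [ 1 -1]
```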

AI-enhanced cybersecurity

As we stand on the cusp of an AI-powered revolution in problem-solving, we have the unique opportunity to envision a future where innovation transcends boundaries and ignites a transformation toward a more sustainable and equitable world.

Imagine a world where AI-driven solutions are not just tools, but partners in our collective journey to overcome global challenges. Envision a reality where predictive algorithms not only forecast climate patterns but also guide us in shaping policies that safeguard our planet for generations to come.

Picture a healthcare landscape where AI-powered diagnostics and treatments leave no one behind, where diseases are intercepted at their inception, and access to quality care is a universal right.

In this future, poverty and inequality are addressed with precision, as AI's discerning insights help tailor interventions that lift marginalized communities out of hardship.

Visualize an agricultural realm where AI-orchestrated precision farming ensures bountiful harvests while respecting the delicate balance of nature. Consider an educational landscape where AI fosters personalized learning, nurturing each individual's potential, and empowering them to contribute meaningfully to society.

But beyond these advancements lies a more profound transformation. Envision a world where the tapestry of interconnected global challenges is rewoven with threads of AI-driven innovation.

A world where disparate issues are no longer siloed, but rather, the intersections between them are illuminated by AI's analytical brilliance. This holistic approach creates a ripple effect, where solutions in one realm cascade into benefits across others.

The intricate dance of technology and humanity fosters a new era of collaboration, where governments, industries, researchers, and citizens unite under the banner of AI-driven progress.

In this future, we not only see the potential for change, but we feel it in our lives and communities. It's a world where the divide between the privileged and the underserved narrows, where environmental stewardship is a shared responsibility, and where innovation is guided by ethical principles.

As we envision this future, let us recognize that AI is not merely a tool; it is a catalyst for a global evolution toward greater sustainability and equity. The journey to this future starts with us today – in how we harness AI's potential, in the choices we make, and in the collaborative spirit we cultivate.

Let us embark on this transformative path, driven by the belief that AI can and will lead us to a world that is not only smarter, but also kinder, fairer, and more harmonious for all.


Soft Computing and Machine Learning Applications for Healthcare Systems

About this Research Topic

Soft Computing (SC) is an Artificial Intelligence (AI) approach that is more effective at solving real-life problems than traditional computing models. Soft Computing models are tolerant of partial truths, imprecision, uncertainty, and approximation in handling and providing usable solutions to complex ...

Keywords: Machine Learning, Artificial Intelligence, Healthcare Systems, Bioinformatics, Health Informatics


Perspective | Published: 06 March 2024

Artificial intelligence and illusions of understanding in scientific research

  • Lisa Messeri (ORCID: 0000-0002-0964-123X)
  • M. J. Crockett (ORCID: 0000-0001-8800-410X)

Nature volume 627, pages 49–58 (2024)


Subjects: Human behaviour, Interdisciplinary studies, Research management, Social anthropology

Scientists are enthusiastically imagining ways in which artificial intelligence (AI) tools might improve research. Why are AI tools so attractive and what are the risks of implementing them across the research pipeline? Here we develop a taxonomy of scientists’ visions for AI, observing that their appeal comes from promises to improve productivity and objectivity by overcoming human shortcomings. But proposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do. Such illusions obscure the scientific community’s ability to see the formation of scientific monocultures, in which some types of methods, questions and viewpoints come to dominate alternative approaches, making science less innovative and more vulnerable to errors. The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less. By analysing the appeal of these tools, we provide a framework for advancing discussions of responsible knowledge production in the age of AI.



Department of Anthropology, Yale University, New Haven, CT, USA

Lisa Messeri

Department of Psychology, Princeton University, Princeton, NJ, USA

M. J. Crockett

University Center for Human Values, Princeton University, Princeton, NJ, USA


Contributions

The authors contributed equally to the research and writing of the paper.

Corresponding authors

Correspondence to Lisa Messeri or M. J. Crockett .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature thanks Cameron Buckner, Peter Winter and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.


About this article


Messeri, L., Crockett, M.J. Artificial intelligence and illusions of understanding in scientific research. Nature 627 , 49–58 (2024). https://doi.org/10.1038/s41586-024-07146-0


Received : 31 July 2023

Accepted : 31 January 2024

Published : 06 March 2024

Issue Date : 07 March 2024



Black-winged kite algorithm: a nature-inspired meta-heuristic for solving benchmark functions and engineering problems

  • Open access
  • Published: 23 March 2024
  • Volume 57, article number 98 (2024)



  • Jun Wang,
  • Wen-chuan Wang (ORCID: orcid.org/0000-0003-1367-5886),
  • Xiao-xue Hu,
  • Lin Qiu &
  • Hong-fei Zang

This paper proposes the Black-winged Kite Algorithm (BKA), a meta-heuristic optimization algorithm inspired by the migratory and predatory behavior of the black-winged kite. The BKA integrates the Cauchy mutation strategy and the leader strategy to enhance the global search capability and the convergence speed of the algorithm. This novel combination achieves a good balance between exploring global solutions and utilizing local information. Against the standard test function sets of CEC-2022 and CEC-2017, as well as other complex functions, BKA attained the best performance in 66.7%, 72.4% and 77.8% of the cases, respectively. The effectiveness of the algorithm is validated through detailed convergence analysis and statistical comparisons. Moreover, its application to five practical engineering design problems demonstrates its potential in addressing constrained challenges in the real world and indicates that it is highly competitive with existing optimization techniques. In summary, the BKA has proven its practical value and advantages in solving a variety of complex optimization problems due to its excellent performance. The source code of BKA is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/161401-black-winged-kite-algorithm-bka.


1 Introduction

In recent years, due to resource scarcity and growing demand (Feng et al. 2024), improving production efficiency has become a research hotspot (Zhao et al. 2023a, b). As technology advances and problems become more complex, optimization tasks are frequently multi-objective, large-scale, uncertain, and difficult to parse (Wan et al. 2023). In the real world, many problems have multiple optimization objectives and constraints, while traditional optimization algorithms (Inceyol and Cay 2022; Wang et al. 2022) are mainly designed for problems with a single objective or a small number of objectives (Atban et al. 2023; Hu et al. 2023; Wang et al. 2023a, b). Traditional algorithms may not be able to accurately find the optimal solution when faced with these challenging optimization tasks, or the solving procedure may be overly complicated and time-consuming. Secondly, the search space of some problems is vast, and traditional optimization algorithms struggle to search it efficiently for the optimal solution. In addition, once a problem involves uncertainty and fuzziness (Berger and Bosetti 2020), traditional optimization algorithms cannot handle it well, because they are mainly based on deterministic assumptions and constraints. At the same time, there are always uncertainty and randomness in areas such as venture capital (Xu et al. 2023a, b), supply chain management (Zaman et al. 2023), and resource scheduling (Al-Masri et al. 2023). Finally, traditional optimization algorithms typically rely on the analytical form of the problem, which requires the problem to be clearly defined and described in mathematical form (Kumar et al. 2023). In practical situations, it is often difficult to express the problem analytically, or the problem's objective function and constraint conditions are intricate (Wang et al. 2020). In summary, traditional optimization algorithms often cannot meet the needs and challenges of current optimization tasks.

In this context, meta-heuristic optimization algorithms (Fan and Zhou 2023 ) have rapidly developed due to their flexibility and gradient-free mechanisms. They have become essential tools for solving production efficiency improvement problems. The flexibility of meta-heuristic optimization algorithms enables them to adapt to diverse production environments and problem scenarios (Melman and Evsutin 2023 ). Meta-heuristic optimization algorithms can search and explore the problem space based on the characteristics of specific problems to find the best solution or a solution that comes close to the best one (Abdel-Basset et al. 2023a , b , c ). Whether facing problems such as product design, production planning, resource allocation, or supply chain management, meta-heuristic optimization algorithms can flexibly adjust and optimize according to actual situations.

Meanwhile, meta-heuristic optimization algorithms are gradient-free (Liu and Xu 2023), which allows them to handle problems without explicit gradient information or continuous derivatives. In many production environments, it is difficult to obtain gradient information analytically, as traditional optimization methods require. Meta-heuristic optimization algorithms instead exploit local knowledge about the problem through heuristic search and random exploration. Besides high-dimensional and nonlinear problems, this gradient-free approach is also suitable for discrete and constraint-based problems (Boulkroune et al. 2023).

1.1 Meta-heuristic methods

An optimization algorithm based on heuristic search is called a meta-heuristic optimization algorithm (Wang et al. 2023a, b). Such algorithms usually have no special requirements for the objective function; instead, they search by simulating intelligent behavior in nature (Chen et al. 2023) or other phenomena. They have a broader range of applications, a certain probability of escaping local optima, and are therefore more likely to find a globally optimal solution. Meta-heuristic optimization algorithms are characterized by solid global search ability and robustness (Xu 2023a, b; Zhao et al. 2023a, b); they can find optimal solutions to large-scale, high-dimensional problems and quickly tackle problems for which no polynomial-time algorithm exists or has yet been found. The classification diagram for meta-heuristic optimization algorithms is shown in Fig. 1. Meta-heuristic algorithms, which combine random search with local search to solve challenging optimization problems, are inspired by random phenomena in nature (Bingi et al. 2023). They can be broadly classified into the following four types based on their various sources of inspiration:

Figure 1: Classification of metaheuristic algorithms.

The algorithm is designed based on the behavioral characteristics of biological populations. These models simulate organisms' collective intelligence and collaborative strategies, enabling the rapid search of problem space and finding global optimal or better approximate solutions. Biologically inspired optimization models perform well in handling continuous and global search problems. Zamani et al. ( 2022 ) present a novel bio-inspired algorithm inspired by starlings’ behaviors during their stunning murmuration named Starling Murmuration Optimizer (SMO) to solve complex and engineering optimization problems as the most appropriate application of metaheuristic algorithms. The SMO introduces a dynamic multi-flock construction and three new search strategies: separating, diving, and whirling. Sand Cat Swarm Optimization (Seyyedabbasi and Kiani 2023 ) is a meta-heuristic algorithm based on sand cats' natural behavior. This algorithm was influenced by sand cats' capacity to recognize low-frequency noise. Due to its unique traits, the sand cat can find prey above and below ground. The Squirrel Search Algorithm (SSA) (Jain et al. 2019 ) is a single-objective optimization problem-solving heuristic algorithm based on the feeding habits of wild squirrels. This algorithm simulates the search strategy of squirrels when searching for food, gradually approaching the optimal solution by continuously adjusting the search position and range. To achieve the goal of optimization, Aquila Optimizer (AO) (Abualigah et al. 2021 ) primarily mimics eagles' behavior while capturing prey. It has strong optimization ability and fast convergence speed. The inspiration for the Sea Horse Optimizer (SHO) (Zhao et al. 2023a , b ) comes from the hippocampus's movement, predation, and reproductive behavior in nature. The foraging and navigational habits of African vultures served as the basis for the African Vultures Optimization Algorithm (AVOA) (Abdollahzadeh et al. 2021 ). Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995 ) is a search algorithm developed based on group collaboration by simulating the foraging behavior of bird flocks. The Chameleon Swarm Algorithm (CSA) (Braik 2021 ) models the chameleons' dynamic foraging behavior in and around trees, deserts, and swamps. The Mayfly Algorithm (MA) (Zervoudakis and Tsafarakis 2020 ) is inspired by the mayflies' flight behavior and mating process. Wild horses' lives and behaviors inspired the Wild Horse Optimizer (WHO) (Naruei and Keynia 2022 ). Spider Wasp Optimizer (SWO) (Abdel-Basset et al. 2023b ) is proposed based on female spider wasps' hunting, nesting, and mating behavior. The Coati Optimization Algorithm (COA) (Dehghani et al. 2022 ) is inspired by coatis. The grey wolf's social structure and hunting strategies served as the basis for the Grey Wolf Optimization (GWO) algorithm (Mirjalili et al. 2014 ). The Marine Predators Algorithm (MPA) (Faramarzi et al. 2020a , b ) draws inspiration from the prey-hunting Brownian and Lévy movements of marine predators. The Ant Lion Optimizer (ALO) (Mirjalili 2015 ) is modeled after how ants navigate between their nests and food in their natural behavior. The humpback whales' bubble net hunting techniques and natural behavior served as the basis for the Whale Optimization Algorithm (WOA) (Mirjalili and Lewis 2016 ). The Dandelion Optimizer (DO) (Zhao et al. 2022 ) was proposed to simulate the process of dandelion seeds flying over long distances by wind. 
This algorithm considers two main factors, wind speed and weather, and introduces Brownian motion and Lévy flight to describe the seeds' motion trajectories. Golden Jackal Optimization (GJO) (Chopra and Ansari 2022) is inspired by the cooperative hunting behavior of golden jackals in nature.

Algorithms abstracted from human behavior or social phenomena. These models have strong learning ability and adaptability and have demonstrated excellent performance in fields such as image recognition and natural language processing. The Volleyball Premier League (VPL) algorithm (Moghdani and Salimifard 2018) is inspired by the rivalry and interaction between volleyball teams throughout a season. The social learning behavior of humans organized in families within a social environment is the basis for the Social Evolution and Learning Optimization (SELO) algorithm (Kumar et al. 2018). The inspiration for Social Group Optimization (SGO) (Satapathy and Naik 2016) comes from social group learning. The inspiration for the Cultural Evolution Algorithm (CEA) (Kuo and Lin 2013) comes from the process of social transformation. Hunter Prey Optimization (HPO) (Naruei et al. 2021) is inspired by the process of animal hunting. The inspiration for the IbI Logic Algorithm (Mirrashid and Naderpour 2023) comes from the logical thinking of the human brain.

Algorithms inspired by genetic evolution. These models can handle discrete and multi-objective optimization problems and have strong robustness and global search ability on complex issues. Gene Expression Programming (GEP) (Sharma 2015) uses gene-expression programming, guided by the laws of genetic inheritance, natural selection, and survival of the fittest with elimination of the unfit, to model the mathematical relationship among a set of data points; the population keeps evolving to find the most suitable chromosome. The processes by which species move from one island to another, new species appear, and species go extinct are the inspirations for Biogeography-Based Optimization (BBO) (Simon 2008) and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen and Kern 2004). The inspiration for Symbiotic Organisms Search (SOS) (Cheng and Prayogo 2014) comes from symbiotic phenomena in biology. The inspiration for Evolution Strategies (ES) (Beyer and Schwefel 2002) comes from biological evolution. Genetic Programming (GP) (Koza 1992) is inspired by natural selection.

Algorithms abstracted from physical properties or chemical reactions. These models can jump between multiple local optimal solutions and find global optimal solutions by simulating the characteristics of physical phenomena and optimizing the search strategy. The Kepler Optimization Algorithm (KOA) (Abdel-Basset et al. 2023a, b, c) is a physics-based meta-heuristic algorithm that predicts the position and motion of planets at any given time using Kepler's laws of planetary motion. The Energy Valley Optimizer (EVO) (Azizi et al. 2023) is a new meta-heuristic algorithm that draws inspiration from the particle decay modes and stability laws of physical theory. The Light Spectrum Optimizer (LSO) (Abdel-Basset et al. 2022) is a physics-inspired meta-heuristic algorithm based on the dispersion of light at different angles as it passes through raindrops, which generates the meteorological phenomenon of the rainbow spectrum. The Rime Optimization Algorithm (RIME) (Su et al. 2023) constructs a soft-rime search strategy and a hard-rime puncture mechanism, simulating the soft-rime and hard-rime growth processes of ice to realize exploration and exploitation behavior in the optimization method. Multi-verse Optimization (MVO) (Mirjalili et al. 2016) is inspired by the expansion rate of the universe, utilizing the principle that white holes have a higher expansion rate and black holes a lower one; particles in the universe search by transferring from white holes to black holes through wormholes. The control-volume mass balance model used to estimate dynamic and equilibrium states is the primary source of inspiration for the Equilibrium Optimizer (EO) (Faramarzi et al. 2020a, b).

1.2 Related work

In this section, we discuss some recent work.

Banaie-Dezfouli et al. (2023) introduce an improved binary GWO algorithm called the extreme value-based GWO (BE-GWO). This algorithm proposes a new cosine transfer function (CTF) to convert continuous GWO into binary form and then introduces an extreme value (Ex) search strategy to improve the efficiency of converting binary solutions. Nama et al. (2023) propose a new ensemble algorithm called e-mPSOBSA that combines a reformed Backtracking Search Algorithm (BSA) with PSO. Chakraborty et al. (2022) suggest an enhanced SOS algorithm called nwSOS to solve higher-dimensional optimization problems. Nama and Saha (2022) introduce an improved BSA (ImBSA) based on multi-group methods and modified control-parameter settings to understand the collection of various mutation strategies. Nama (2021) proposes an improved form of SOS to establish a more stable balance between the exploration and exploitation cores; this technique uses three distinct schemes: adjusting benefit factors, a modified parasitism phase, and a search based on random weights. To achieve the best DE efficiency, Nama and Saha (2020) proposed a new version of the DE algorithm that controls parameters and mutation operators, making appropriate adjustments to time-consuming control parameters. Nama (2022) offers a new quasi-reflective slime mold method (QRSMA) that combines the SMA algorithm with a quasi-reflection-based learning mechanism (QRBL) to improve the performance of SMA. Nama, Sharma et al. (2022a, b) proposed an improved BSA framework called gQR-BSA, based on quasi-reflection initialization, quantum Gaussian mutation, adaptive parameter execution, and quasi-reflection hopping to change the coordinate structure of BSA. This algorithm adopts adaptive parameter settings, Lagrange interpolation formulas, and a new local search strategy embedded in Lévy flight search to enhance search capability and better balance exploration and exploitation. Nadimi-Shahraki et al. (2023a, b) wrote a review of whale optimization algorithms, systematically explaining the theoretical basis, improvements, and hybridizations of WOA. Sharma et al. (2022a, b) proposed a new variant of BOA, mLBOA, to improve its performance. Sahoo et al. (2023) propose an improved MFO algorithm (m-DMFO) combined with an enhanced dynamic-opposite learning (DOL) strategy. Sharma et al. (2022a, b) propose a hybrid sine cosine butterfly optimization algorithm (m-SCBOA), which combines the improved butterfly optimization algorithm with the sine cosine algorithm to achieve excellent exploratory and exploitative search capabilities. Chakraborty et al. (2023) have proposed a hybrid slime mold algorithm (SMA) to address the issues above and accelerate the exploration of natural slime molds. Nadimi-Shahraki et al. (2023b) have developed an enhanced moth flame optimization algorithm called MFO-SFR to solve global optimization problems. Zamani et al. (2021) propose a novel DE algorithm named the Quantum-based Avian Navigation Optimizer Algorithm (QANA), inspired by the extraordinarily precise navigation of migratory birds over long-distance aerial paths. In the QANA, the population is distributed by partitioning into multiple flocks to explore the search space effectively, using a proposed self-adaptive quantum orientation and quantum-based navigation consisting of two mutation strategies, DE/quantum/I and DE/quantum/II. Nama et al. (2022a, b) propose a new integrated technique called e-SOSBSA to completely change the degree of intensification and diversification, thereby striving to eliminate the shortcomings of traditional SOS.

1.3 Motivation of the work

It should be noted that no algorithm can provide a comprehensive solution for every problem. As the 'No Free Lunch' (NFL) theorem (Wolpert and Macready 1997) indicates, no meta-heuristic algorithm is superior for every optimization problem. In other words, a specific meta-heuristic algorithm may achieve excellent results on particular problems but may not perform as well on other types of problems. With the continuous progress of technology and the increasing complexity of problems, some traditional algorithms can no longer solve these problems effectively. After reviewing the relevant literature, we found that many algorithms have limitations, including insufficient search ability and difficulty in converging to the global optimal solution. These shortcomings limit algorithm performance. This prompted us to propose a newer and more powerful algorithm to overcome the limitations of existing algorithms and seek more effective solutions. After careful consideration, we introduce an intelligent optimization algorithm inspired by the black-winged kite. We chose black-winged kites as our source of inspiration because they exhibit high adaptability and intelligent behavior in attack and migration, which motivated us to develop an algorithm better able to cope with complex problems. These reasons are the main driving force behind our research.

1.4 Contribution and innovation to the work

The contribution and innovation of this article are as follows:

The contribution of the proposed Black-winged Kite Algorithm (BKA) lies in its unique biologically inspired features, which not only capture the flight and predatory behavior of black-winged kites in nature but also deeply simulate their high adaptability to environmental changes and target positions. The imitation of this biological mechanism gives the algorithm robust dynamic search capabilities, enabling it to cope effectively with changing optimization environments.

In the black-winged kite algorithm, we first introduce the Cauchy mutation strategy, a probability-distribution-based strategy that helps the algorithm jump out of local optima and increases the probability of discovering better solutions in the global search space. This strategy improves the algorithm's ability to discover global optimal solutions and offers new solutions for high-dimensional, complex optimization problems.

We also integrate a leader strategy that mimics the leadership role of leaders in the kite community, ensuring that the algorithm can effectively exploit the current best solution and guide the search direction. This method not only enhances the efficiency with which the algorithm exploits the current search area but also effectively balances the dynamics between exploration and exploitation, ensuring that potentially competitive new areas are not overlooked in the pursuit of optimal solutions.

The remainder of this research is structured as follows: The second section introduces the Black-winged kite's attack strategy and migration behavior (Wu et al. 2023 ) and develops a mathematical model based on them. The third section analyzes 59 benchmark functions and the test results. Five real-world engineering cases are presented in the fourth section, and the outcomes are examined. This article is summarized, and prospects are suggested in the fifth section.

2 The black-winged kite algorithm (BKA)

In this section, a naturally inspired algorithm called the BKA is proposed.

2.1 Inspiration and behavior of black-winged kites

The black-winged kite is a small bird with a blue-gray upper body and a white lower body. Its notable features include migratory and predatory behavior (Ramli and Fauzi 2018). Black-winged kites feed on small mammals, reptiles, birds, and insects, possess strong hovering abilities, and can achieve extraordinary hunting success (Wu et al. 2023). Inspired by their hunting skills and migration habits, we established an algorithm model based on black-winged kites.

2.2 Mathematical model and algorithm

The development of the BKA algorithm as a simple and effective meta-heuristic optimization method is illustrated in this section. We modeled the migration and attack stages of the proposed BKA based on the Black-winged kite's attack strategy and migration behavior. In Fig. 2 , the pseudo-code of BKA is presented. This pseudocode clearly describes the execution process of the BKA algorithm. It provides steps and operations to solve specific problems and optimizes the results through iteration and adjustment.

Figure 2: Pseudocode of BKA.

Figure 3: (a) A black-winged kite hovering in the air; (b) the black-winged kite rushing towards its prey at great speed.

2.2.1 Initialization phase

In BKA, creating a set of random solutions is the first step in initializing the population. The following matrix can be used to represent the location of every Black-winged kite (BK):

where pop is the number of potential solutions, dim is the dimension of the given problem, and BK_ij is the j-th dimension of the i-th black-winged kite. The position of each black-winged kite is distributed uniformly within the search bounds:

BK_ij = BK_lb + rand × (BK_ub − BK_lb),

where i is an integer between 1 and pop, BK_lb and BK_ub are the lower and upper bounds of the i-th black-winged kite in the j-th dimension, respectively, and rand is a value chosen at random from [0, 1].

In the initialization process, BKA selects the individual with the best fitness value in the initial population as the leader X_L, which is considered the optimal location of the black-winged kites. Taking minimization as an example, the initial leader X_L is the position BK_i whose fitness f(BK_i) is the minimum over the whole population.
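
As a concrete illustration of this initialization (a Python sketch with our own function and variable names, not the authors' MATLAB implementation), the population matrix, fitness values, and initial leader X_L can be produced as follows.

```python
import numpy as np

def initialize_population(pop, dim, lb, ub, objective, rng=None):
    """Uniform random initialization of the black-winged kite population,
    following the description above: BK[i, j] = lb + rand * (ub - lb).
    Bounds may be scalars or per-dimension arrays."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    BK = lb + rng.random((pop, dim)) * (ub - lb)      # pop x dim position matrix
    fitness = np.apply_along_axis(objective, 1, BK)   # evaluate each kite
    leader = BK[np.argmin(fitness)].copy()            # X_L: minimum fitness (minimization)
    return BK, fitness, leader

# Example usage on the sphere function in 10 dimensions:
# BK, fit, X_L = initialize_population(30, 10, -100, 100, lambda x: np.sum(x**2))
```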

2.2.2 Attacking behavior

As a predator of small grassland mammals and insects, the black-winged kite adjusts its wing and tail angles according to wind speed during flight, hovers quietly to observe prey, and then quickly dives to attack. This strategy includes different attack behaviors for global exploration and search. Figure 3a shows a black-winged kite hovering in the air, spreading its wings and maintaining balance. Figure 3b shows the black-winged kite rushing towards its prey at extremely high speed. Figure 4a shows the attack state of the black-winged kite as it hovers in the air waiting to attack, while Fig. 4b shows its state as it hovers in the air searching for prey. The following is a mathematical model of the attack behavior of black-winged kites:

Figure 4: Two attack strategies of black-winged kites: (a) hovering in the air, waiting to attack; (b) hovering in the air, searching for prey.

The following is a definition of the characteristics of Eqs. ( 5 ) and ( 6 ):

y_(i,j)^t and y_(i,j)^(t+1) represent the position of the i-th black-winged kite in the j-th dimension at iteration steps t and (t + 1), respectively.

r is a random number that ranges from 0 to 1, and p is a constant value of 0.9.

T is the total number of iterations, and t is the number of iterations that have been completed so far.
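
Equations (5) and (6) themselves are not reproduced in this excerpt, so the following Python sketch shows only the structure implied by the definitions above: a per-individual random draw r compared against the constant p = 0.9 selects between the two attack behaviours. The branch updates are explicit placeholders, not the paper's actual update rules.

```python
import numpy as np

def attack_step(BK, t, T, p=0.9, rng=None):
    """Structural sketch of the attacking phase only.
    The two branch updates below are PLACEHOLDERS: the paper's Eqs. (5)-(6)
    define the real hovering/diving position updates, which are not
    reproduced in this excerpt."""
    rng = np.random.default_rng() if rng is None else rng
    pop, dim = BK.shape
    new_BK = BK.copy()
    for i in range(pop):
        r = rng.random()                 # r ~ U(0, 1), as defined above
        scale = np.exp(-(t / T))         # illustrative iteration-dependent factor (assumption)
        if r < p:                        # switch between the two attack behaviours
            new_BK[i] += scale * rng.normal(size=dim) * BK[i]        # placeholder "hover" update
        else:
            new_BK[i] += scale * (2 * rng.random(dim) - 1) * BK[i]   # placeholder "dive" update
    return new_BK
```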

2.2.3 Migration behavior

Bird migration is a complex behavior influenced by environmental factors such as climate and food supply (Flack et al. 2022). Birds migrate to adapt to seasonal changes, and many birds migrate south from the north in winter to obtain better living conditions and resources (Lees and Gilroy 2021). Migration is usually led by leaders, whose navigation skills are crucial to the success of the flock. We propose a hypothesis based on bird migration: if the fitness value of the current population is less than that of the random population, the leader will give up leadership and join the migrating population, indicating that it is not suited to leading the population forward (Cheng et al. 2022). Conversely, if the fitness value of the current population is greater than that of the random population, it will guide the population until it reaches its destination. This strategy dynamically selects good leaders to ensure a successful migration. Figure 5 shows the change of the leading bird during the migration of black-winged kites. The following is a mathematical model of the migration behavior of black-winged kites:

Figure 5: The strategic changes of black-winged kites during migration.

The attributes of Eqs. ( 7 ) and ( 8 ) are defined as follows:

L_j^t represents the leader of the black-winged kites in the j-th dimension at the t-th iteration so far.

F_i represents the fitness value of the current position obtained by any black-winged kite in the j-th dimension at the t-th iteration.

F_ri represents the fitness value of a random position in the j-th dimension obtained by any black-winged kite at the t-th iteration.

C(0, 1) represents the Cauchy mutation (Jiang et al. 2023). It is defined as follows:

A one-dimensional Cauchy distribution is a continuous probability distribution with two parameters. The following equation illustrates the probability density function of the one-dimensional Cauchy distribution:

When δ  = 1, μ  =  0, its probability density function will become the standard form. The following is the precise formula:
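
The density being described is the standard one-dimensional Cauchy distribution; written out with location parameter μ and scale parameter δ, it and its standard form (δ = 1, μ = 0) are:

```latex
f(x;\,\mu,\delta) = \frac{1}{\pi}\,\frac{\delta}{\delta^{2} + (x-\mu)^{2}}, \quad \delta > 0;
\qquad
f(x;\,0,1) = \frac{1}{\pi}\,\frac{1}{x^{2} + 1}.
```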

2.3 The balance and diversity analyses

Maintaining a good balance between global and local search, that is, between exploring and exploiting the search space, is an important factor in enabling an optimization algorithm to find the optimal solution. In this process, the proportions of global and local search must be balanced so that the algorithm does not converge prematurely and can still find the best solution. To balance these concerns, the proposed algorithm uses the parameter p to control the switch between the different attack behaviors. At the same time, the variable n decreases nonlinearly as the number of iterations increases, which shifts the algorithm from global search towards local search, enabling it to find the optimal solution faster while avoiding getting stuck in local optima. A candidate decay schedule is sketched below.
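
The exact nonlinear schedule for n is defined in the paper but not reproduced in this excerpt; purely as an illustration, any smooth monotone decay over the iteration budget matches the description, for example the hypothetical form below.

```python
import numpy as np

def decay_n(t, T, n0=0.05):
    """Illustrative nonlinear decay for the control variable n.
    The constant n0 and the squared-exponential form are assumptions made
    for illustration only; the paper defines the actual schedule."""
    return n0 * np.exp(-2.0 * (t / T) ** 2)   # large early (global search), small late (local search)
```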

Diversity is very important in intelligent optimization algorithms, as it helps to avoid the population falling into local optima and provides a wide search range, increasing the chances of the algorithm discovering global optima. Like most intelligent optimization algorithms, the individuals in the initial population of this article are randomly generated within a given range, which results in certain differences in the positions and eigenvalues of each individual, thus giving the individuals in the population a certain degree of diversity and better exploration of the solution space. Meanwhile, during the iteration process of the algorithm, the application of the Cauchy strategy and the reasonable setting of parameters improve the diversity of the algorithm, improve its global search ability, and avoid falling into local optima.

2.4 Computational complexity

We can assess the time and space resources needed for algorithms to handle large-scale problems using computational complexity, a crucial indicator of algorithm efficiency. To better understand the effectiveness and viability of the proposed BKA algorithm, we will conduct a thorough analysis of the time and spatial complexity of the algorithm in this section.

2.4.1 Time complexity

The BKA algorithm initializes a set of potential solutions during initialization, which will be used for further search and optimization. The initialization method selected, as well as the size of the problem, typically determine how time-consuming the initialization process is. The number of candidate solutions or the size of the problem, denoted by M , determines the computational complexity of the initialization procedure in this article, which is O  ( M ). This process involves generating initial solutions, determining parameter settings, and initializing other necessary operations. This initialization process needs to be executed once before starting the algorithm. Second, one of the crucial components of the BKA algorithm, which is used to assess the effectiveness and quality of potential solutions, is fitness evaluation. The issues considered and the particular evaluation method determine how complicated the fitness assessment process is. For specific problems, fitness assessment involves complex computational or simulation techniques with a time complexity of O  ( T  ×  M ) +  O  ( T  ×  M  ×  D ), where T is the maximum number of iterations and D is the specific problem's dimension. Finally, updating the Black-winged kite is a critical step in the BKA algorithm, which generates new candidate solutions based on the current key and neighborhood search. The neighborhood search difficulty and the update strategy employed determine the difficulty of updating Black-winged kites. Therefore, the runtime complexity of the BKA is O ( M  × ( T  +  T  ×  D  + 1)).
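
To make the counting above concrete, the following schematic Python outline (a sketch with assumed helper callables, not the authors' implementation) mirrors the loop structure that yields the O(M) initialization, O(T × M) fitness-evaluation, and O(T × M × D) update terms.

```python
# Schematic outline matching the complexity analysis above: initialization
# touches M individuals once, and each of the T iterations evaluates M
# individuals and updates M x D position components.
def bka_outline(M, T, D, init, evaluate, update_position):
    population = [init(D) for _ in range(M)]             # O(M) initialization
    for t in range(T):                                   # T iterations
        scores = [evaluate(ind) for ind in population]   # O(T * M) fitness evaluations
        for i in range(M):                               # O(T * M * D) position updates
            population[i] = update_position(population[i], scores, t, T)
    return population
```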

2.4.2 Space complexity

The spatial complexity of the BKA algorithm refers to the additional storage space required during algorithm operation. Let's analyze the spatial complexity of the BKA algorithm. The spatial complexity of the BKA algorithm is relatively low. The primary space consumption comes from storing candidate solutions and related intermediate results and temporary variables. Specifically, BKA algorithms typically only need to store the current best solution, candidate solutions, and some data structures related to the search and optimization process. In the most straightforward implementation, the spatial complexity of the BKA algorithm is approximately O  ( M ), where M represents the number of candidate solutions or the size of the problem. This is because the algorithm needs to allocate storage space for each candidate solution and update and compare it during iteration. In addition, additional storage space is needed to store other auxiliary variables and intermediate results. It should be noted that the BKA algorithm's spatial complexity can change depending on the particulars of the problem and its implementation. The spatial complexity may increase if more complex data structures or intermediate result storage are used in the algorithm.

3 Experimental results and discussion

This section conducts simulation studies and assesses the effectiveness of BKA in optimization. The experiments are conducted in MATLAB R2022b on a 3.20 GHz 64-bit Core i9 processor with 16 GB of main memory.

3.1 The benchmark set and compared algorithms

The ability of BKA to handle a variety of objective functions is tested in this article using 59 standard benchmark functions: a set of 18 classical benchmark functions, the CEC-2017 test set (Wu et al. 2016), and the CEC-2022 test set (Yazdani et al. 2021). The test results are compared with those of well-known algorithms such as MVO, SCA, GWO, MPA, RIME, ALO, WOA, STOA, DO, GJO, PSO, AVOA, SHO, SCSO, SSA, AO, and COA to assess the quality of the best solution offered by BKA. The control parameters of these algorithms are all set to the values suggested by their proposers. Three evaluation metrics are used to analyze the algorithm's performance thoroughly: the average (Avg), the standard deviation (Std), and the ranking.

(1) Standard deviation: the standard deviation of the best results obtained over the independent runs (a standard formulation is sketched after this list).

(2) Ranking: ranking depends on the average fitness value of the algorithm. The algorithm is ranked higher when the average value is lower.
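
The exact equation for item (1) is not reproduced in this excerpt; a standard formulation over R independent runs, where f_r is the best objective value obtained in run r (whether the paper normalizes by R or R − 1 is not shown here), is:

```latex
\mathrm{Avg} = \frac{1}{R}\sum_{r=1}^{R} f_r,
\qquad
\mathrm{Std} = \sqrt{\frac{1}{R-1}\sum_{r=1}^{R}\left(f_r - \mathrm{Avg}\right)^{2}} .
```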

3.2 Sensitivity analysis

In this section, experiments and analysis are conducted on the algorithm's internal parameters. The key internal parameters of the BKA algorithm are discussed and analyzed to establish the optimality and rationality of the chosen values. Specifically, we vary the parameter p introduced in Sect. 2.2.2, which controls the switching between the two attack behaviors and strongly affects the overall accuracy and stability of the algorithm. The parameter p is set to 0.3, 0.5, and 0.7 and compared with the original setting p = 0.9 to show the impact of this parameter on BKA's performance. The comparative experiments were conducted within a unified evaluation framework, with the same population size of 30 and 30 independent runs. The experimental results are shown in Tables 1 and 2.

From Table 1 , we can see that for the CEC-2017 test set, BKA achieved the best results among 21 functions at parameter p  = 0.9, achieved the same optimal results as p  = 0.7 on F23, and did not achieve the best results on only 7 functions. From Table 2 , we can see that for the CEC-2022 test set, BKA achieved the best results among 7 functions at parameter p  = 0.9, achieved the same optimal results as p  = 0.5 on the F3 function, achieved the same optimal results as p  = 0.7 on the F4 function, and achieved the same optimal results as p  = 0.5 and p  = 0.3 on the F12 function. Only two functions did not achieve the optimal results. Through a comprehensive analysis of Tables  1 and 2 , we believe that the BKA algorithm can achieve better results in processing optimization when the parameter p  = 0.9.

3.3 The results of the algorithm on different test sets

This section uses several test sets to gauge how well the newly developed meta-heuristic algorithm BKA handles global optimization problems.

3.3.1 Evaluation of 18 functions and qualitative analysis

This test set includes both unimodal and multimodal functions to thoroughly assess the performance of the BKA algorithm (Xie and Huang 2021). The unimodal functions (F1–F9) each have a single global optimum and are used to verify the efficacy of the optimization algorithm. The multimodal functions (F10–F18) have many local extrema and are used to assess the algorithm's exploratory power. Tables 3 and 4 provide detailed information on the 18 test functions. The results of all algorithms were obtained using 30 search agents, 500 iterations, and 10 independent runs.

Table 5 shows the results of BKA and the comparison algorithms on the 18 test functions. The ranking in Table 5 is determined by the value of Avg; the lower the value, the higher the ranking. In Figs. 6 and 7, where the vertical axis denotes the fitness value and the horizontal axis the number of iterations, the convergence curves of BKA and the other optimization algorithms at dimension 10 are contrasted. In the unimodal functions, BKA exhibits an advantage over the other algorithms on F1, F3, and F4, even surpassing them by tens of orders of magnitude. However, on F2, F5, and F9, the advantage of BKA is not as apparent. On F6, the RIME algorithm has a slight edge over BKA; on F7 and F8, the WOA algorithm is slightly better than BKA. Although BKA did not achieve the optimal value on all unimodal functions, the gap between BKA and the best-performing algorithm is minimal on those functions where BKA did not achieve the optimal solution. It should be emphasized that although BKA has significant advantages on some unimodal functions, its performance is not entirely dominant compared to other algorithms; in specific problem domains and function types, other algorithms remain competitive with similar performance. In the multimodal functions, the BKA algorithm achieved the theoretical optimal value of 0 on F10, F11, F13, F15, and F17. On F12 and F18, the BKA algorithm achieved results similar to those of other algorithms. While other algorithms get stuck in local optima on F14 and F16, the BKA algorithm still achieves excellent results. These findings show that the BKA algorithm performs well in global search and optimization when dealing with multimodal functions. On most multimodal functions, the BKA algorithm can accurately find the theoretical optimal value, demonstrating its power in global optimization. The BKA algorithm's results on functions F12 and F18 are comparable to those of other algorithms, but they still exhibit the algorithm's effectiveness and robustness in handling complex problems. In contrast to other algorithms, the BKA algorithm can avoid getting trapped in local optima and produce results close to the ideal outcome.

Figure 6: Convergence analysis of the proposed BKA and competitor algorithms on unimodal functions in dimension 10.

Figure 7: Convergence analysis of the proposed BKA and competitor algorithms on multimodal functions in dimension 10.

Figure 8 shows the search surface graph of the benchmark function, the historical search process of BKA, the average convergence curve of fitness values, and the average convergence curve. The first column displays the search space of each algorithm, and observing the search surface of the search space can provide a more precise and intuitive understanding of the characteristics of the function. The graph shows that F1, F2, and F8 have only one extreme value, while F12, F13, and F17 have multiple extreme values. The second column depicts the historical search process of BKA on a global scale, where the red dots represent the positions of the optimal individuals in each generation of BKA, and the blue dots represent the positions of ordinary individuals. Observing the images of the historical search process makes it possible to gain a more intuitive understanding of the distribution of BKA and the changes in individual positions during the iteration process. The intermediate fitness image of BKA represents the average target optimal values of all dimensions during each iteration process in the third column, which also shows the average trend of the population's evolution. The average fitness value of the BKA algorithm exhibits strong oscillations in the early iterations, which gradually weaken and tend to flatten out, as seen in the images. This reveals that the BKA algorithm has been fully explored in its early stages and extensively searched and optimized globally.

Figure 8: Search space, search history, average fitness, and convergence curve of the BKA algorithm.

Meanwhile, in the later stages of the iteration, we can also observe significant short-term oscillations. This reflects the BKA algorithm's continuous attempts to jump out of the local optimal value in the later stage to find higher accuracy and better solutions. This short-term oscillation indicates that the BKA algorithm has a certain degree of convergence and continuously strives to improve the quality of the key in the later stages. Overall, the BKA algorithm exhibits a strategy of exploration before development during the optimization process. The algorithm uses large oscillation amplitudes in the early stages to identify potential optimization directions. In the later stage, the BKA algorithm focuses more on fine-tuning and optimization, constantly trying to jump out of the local optimal solution to converge to higher accuracy and better results. The fourth column displays an image of the average convergence curve, which shows the optimal solution obtained by the BKA algorithm throughout the entire iteration process. The multimodal function curve decreases gradually during convergence, while the unimodal function curve rapidly decreases as the number of iterations rises. The ability of the BKA algorithm to quickly exit the local extremum and gradually inch closer to the global optimal value during the optimization process is reflected in this trend.

3.3.2 Evaluation of the CEC-2017 suite test

The CEC-2017 suite is chosen as the test bed in this experiment to gauge BKA's effectiveness in solving optimization problems. The CEC-2017 set contains four different kinds of benchmark functions. It should be noted that the instability of the F2 function may lead to unpredictable optimization results, producing uncertain and inconsistent outcomes when evaluating algorithm performance; the F2 function is therefore excluded from the CEC-2017 test suite to guarantee the test set's validity and consistency. The search domain for all functions in this test suite is [−100, 100], and each test function has ten dimensions. The simulation results of all algorithms are obtained using 30 search agents, 1000 iterations, and 10 independent runs.

On the CEC-2017 test set, the outcomes of our algorithm and the comparison algorithms are shown in Table 6, with the best results denoted in bold. From the data in Table 6, it can be concluded that on the 29 test functions of CEC-2017, the BKA algorithm achieved 21 optimal results, accounting for 72.4%, surpassing the other eight algorithms. The box plot is a typical statistical chart; its resemblance to a box gave it its name. A box plot conveys the degree of dispersion of univariate data and clearly and intuitively displays the dispersion and distribution interval while highlighting abnormal data values. The box's upper and lower boundaries correspond to the upper and lower quartiles of the data, respectively, and the line inside the box marks the median. The shorter the box, the more concentrated the data; the longer the box, the more scattered the data and the worse the stability. Figure 9 shows the box plots of the BKA algorithm and its comparison algorithms on F3, F8, F9, F10, F14, F15, F20, and F26. From the chart, we can draw some conclusions. Firstly, the box plots show that the BKA, GJO, PSO, AVOA, and SHO algorithms have almost no outliers, indicating their high stability: on these benchmark functions, their performance is relatively consistent, without significant fluctuations or anomalies. Secondly, by observing the box lengths, we can see that the box of the BKA algorithm is shorter and sits at a lower position. As a result, the BKA algorithm's solution set varies little on these benchmark functions, demonstrating a high level of solution accuracy.

Figure 9: Box plots of different algorithms on selected functions of CEC-2017 in dimension 10.

A heat map is a graphical representation based on color coding, which conveys the magnitude of the data through color intensity and hue, giving readers a more intuitive understanding of the correlations and trends in the data. In Fig. 10, the darker the color, the greater the error of the algorithm. The figure indicates that all algorithms perform poorly on functions F1, F2, F12, F13, F15, F18, F19, and F30, suggesting that these functions are relatively difficult. In addition, the SSA algorithm has significant errors on most functions, showing that its performance is weak and that it cannot solve these problems effectively. Figure 11 shows the total running time of each algorithm on the CEC-2017 test set. The running time of the BKA algorithm is at a relatively high level, but it differs by no more than 20 s from PSO, which has the shortest running time. It is encouraging to note that on this test set the performance of the BKA algorithm is significantly better than that of PSO and GJO. This indicates that although the BKA algorithm has a slightly longer runtime, it performs well.

Figure 10: The error performance of different algorithms on the CEC-2017 test set.

Figure 11: The total running time of BKA and its comparison algorithms on CEC-2017.

3.3.3 Evaluation of the CEC-2022 objective functions

This section further conducts experiments on the algorithm using the most recent CEC-2022 test set to highlight the uniqueness and superiority of the BKA algorithm. The CEC-2022 set includes four different kinds of benchmark functions. In the CEC-2022 test suite, the search domain for all functions is [− 100, 100]. The CEC-2022 test set provides an updated test set and evaluation metrics aimed at comprehensively evaluating the performance of optimization algorithms. We can better understand its performance in the latest environment by comparing the BKA algorithm with the previously mentioned algorithms. The simulation results of all algorithms are obtained using 30 search agents with 1000 iterations and 10 independent runs.

Table 7 shows that the BKA algorithm outperformed the other eight algorithms by achieving 8 out of the 12 test functions with the best results, or 66.7% of the total. Figure 12 shows that the results of BKA, GJO, PSO, and AVOA perform well on F1, but all have outliers, indicating that the performance of these algorithms is relatively high but not stable enough. Other algorithms perform very well in functions F2 and F6 except for SSA. In Fig. 13 , it can be seen that BKA performs stably on all functions, proving that BKA is robust. However, SSA performs poorly in various functions and cannot handle these challenging tasks. According to the results shown in Fig. 14 , we can observe the error situation of different algorithms. We can see large areas of high error, especially in the color distribution of heat maps for F1 and F6 functions. This indicates that these algorithms typically perform poorly on these specific functions. This indicates that these two functions pose substantial challenges for algorithms, and optimizing these functions is a relatively complex task for most algorithms. The graph shows that, aside from the SSA and COA algorithms, the performance of other algorithms is generally reasonable. They can achieve lower error levels when processing F1 and F6 functions, demonstrating relatively good performance. Figure 15 shows the total running time of each algorithm on the CEC-2022 test set. Observing the graph, it can be seen that the running time of the BKA algorithm is at a relatively high level, with a difference of no more than 10 s compared to the PSO with the shortest running time. This indicates that although the BKA algorithm has a slightly longer runtime, it performs well.

Figure 12: Box plots of different algorithms on the CEC-2022 test set (F1–F6).

Figure 13: Box plots of different algorithms on the CEC-2022 test set (F7–F12).

Figure 14: The error performance of different algorithms on the CEC-2022 test set.

Figure 15: The total running time of BKA and its comparison algorithms on CEC-2022.

In summary, the reasons why the BKA algorithm can achieve the best results are as follows: The BKA algorithm adopts the Cauchy distribution strategy and has a strong global search ability. Through the global search strategy, the BKA algorithm is highly likely to discover the global optimal solution. The BKA algorithm introduces a leader strategy. By selecting individuals with high fitness values as leaders, others learn and improve the solution through interaction with the leader.

3.4 Nonparametric statistical analysis

To comprehensively evaluate the performance of BKA, we use the Wilcoxon signed-rank test and the Friedman test to compare BKA with its contender algorithms. The Wilcoxon signed-rank test is a non-parametric test for comparing two sets of related samples; it determines whether there is a significant difference in the median between the two related samples. The Friedman test is a non-parametric test for comparing multiple sets of related samples; it determines whether there is a significant difference in the medians across the multiple related samples.
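
Both tests are available off the shelf; as an illustration (the arrays below are placeholder values, not results from the paper), they can be run in Python with SciPy as follows.

```python
from scipy import stats
import numpy as np

# Placeholder data: mean best fitness of each algorithm on the same benchmark functions.
bka = np.array([1.2e-8, 3.4e-5, 0.87, 12.5, 4.1e-3])
pso = np.array([2.3e-3, 5.1e-2, 1.42, 15.9, 7.8e-2])
gwo = np.array([4.7e-5, 8.2e-3, 1.05, 14.1, 2.2e-2])

# Wilcoxon signed-rank test: pairwise comparison of two related samples (alpha = 0.05).
w_stat, w_p = stats.wilcoxon(bka, pso)

# Friedman test: simultaneous comparison of three or more related samples.
f_stat, f_p = stats.friedmanchisquare(bka, pso, gwo)

print(f"Wilcoxon p = {w_p:.4f}, Friedman p = {f_p:.4f}")
```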

Tables 8 and 9 list the results of the Wilcoxon test for the different algorithms on the different test sets, all at a significance level of α = 0.05 (95% confidence). In Tables 8 and 9, the symbol "+" indicates that the reference algorithm performs better than the comparison algorithm, the symbol "−" indicates that the reference algorithm is worse than the comparison algorithm, and the symbol "=" indicates no significant difference between the reference and comparison algorithms. From the last row of each table, we can conclude that the BKA algorithm has fewer "−" entries and more "+" and "=" entries, indicating that in most cases the performance of the BKA algorithm is not weaker than that of the comparison algorithms. Tables 10 and 11 list the Friedman test rankings and average rankings of the different algorithms on the different test sets. From Tables 10 and 11, we can see that the BKA algorithm ranks first on most benchmark functions and first in the average rankings. These statistics demonstrate not only the BKA algorithm's excellent performance on individual benchmark functions but, more importantly, that its overall performance allows its practicality across multiple optimization problems to be evaluated more reliably.

3.5 Effectiveness analysis

The overall effectiveness (OE) of the BKA algorithm and the other contender algorithms is computed by Eq. (12) and reported in Table 12, where \(N\) is the total number of test functions and \(L_{i}\) is the number of test functions on which the \(i\)-th algorithm is a loser (Nadimi-Shahraki and Zamani 2022). Table 12 shows that BKA demonstrated its effectiveness with 70.7% on the CEC-2017 and CEC-2022 test sets, far surpassing the other comparative algorithms.
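
Equation (12) itself is not reproduced in this excerpt, so the snippet below assumes the usual reading of the metric, \(OE_{i} = (N - L_{i})/N \times 100\%\), and shows how it would be computed; the function counts used in the example are illustrative only.

```python
def overall_effectiveness(n_functions: int, n_losses: int) -> float:
    """Assumed form of the OE metric: the share of test functions on which
    the algorithm is not a loser, expressed as a percentage.
    n_losses corresponds to L_i in the text."""
    return (n_functions - n_losses) / n_functions * 100.0

# Illustrative example: 41 test functions in total and 12 losses -> about 70.7 %
print(f"{overall_effectiveness(41, 12):.1f} %")
```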

3.6 Limitation analysis

Although BKA has achieved good results on optimization problems, it still has some shortcomings, which can be summarized as follows. The algorithm does not reach optimal results on certain types of optimization problems and shows insufficient stability over multiple runs. Specifically, this instability may stem from the uneven distribution of the initial parameters, which causes the search strategy to vary between runs. In addition, on complex problems with high-dimensional search spaces, the algorithm may converge prematurely or repeatedly during iteration, reducing the consistency of its results. Meanwhile, although BKA is slightly superior in solution quality, its running speed is relatively low, which may become a disadvantage in application scenarios that require fast iteration. To address these limitations, subsequent research should further tune the initial value distribution, refine the exploration and exploitation mechanism, and consider algorithm acceleration strategies, so as to improve the stability and efficiency of the algorithm and better adapt it to various complex optimization problems.

4 BKA for solving engineering problems

This section evaluates how well BKA performs on five engineering design problems: the design of a tension/compression spring, a pressure vessel, a welded beam, a speed reducer, and a three-bar truss. These well-known engineering problems contain numerous equality and inequality constraints, so they allow BKA's ability to optimize real-world, constrained problems to be assessed from the perspective of constraint handling. Here, the constrained problems are transformed into unconstrained ones using the straightforward death-penalty method.

Solving constrained optimization problems is a crucial task in both optimization theory and its applications. There are numerous ways to handle constraints, including special operators, decoder functions, representations that preserve feasibility, repair algorithms, and penalty functions. Constrained optimization problems are typically solved with the penalty function method, a popular technique from optimization theory. The idea is to introduce a penalty term that folds the constraint conditions into the objective function, thereby turning the original constrained problem into an unconstrained one. The optimal solution of the original problem can then be found, without explicitly handling the constraints, by adjusting the form and parameters of the penalty function. This study solves these practical engineering problems using the penalty function method.
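
As an illustration, a minimal sketch of the death-penalty variant of this idea is given below; the penalty constant and the interface are assumptions for demonstration rather than the implementation used in the paper.

```python
import numpy as np

def death_penalty(objective, constraints, penalty=1e10):
    """Wrap a constrained problem as an unconstrained one.

    `constraints` is a list of functions g(x) that must satisfy g(x) <= 0.
    Any infeasible candidate receives a huge constant penalty, so the
    search is effectively confined to the feasible region.
    """
    def penalized(x):
        if any(g(x) > 0 for g in constraints):
            return penalty
        return objective(x)
    return penalized

# Toy example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1 (i.e. 1 - x0 - x1 <= 0)
f = death_penalty(lambda x: x[0]**2 + x[1]**2,
                  [lambda x: 1.0 - x[0] - x[1]])
print(f(np.array([0.5, 0.5])))   # feasible point: returns 0.5
print(f(np.array([0.0, 0.0])))   # infeasible point: returns the penalty 1e10
```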

4.1 Pressure vessel design

This engineering challenge aims to reduce the cost of producing cylindrical pressure vessels while meeting four constraints. The problem can be stated mathematically as follows:

Consider variable \(H = [h_{1} ,h_{2} ,h_{3} ,h_{4} ] = [T_{s} ,T_{h} ,R,L]\)

Variables range \(0 \le h_{j} \le 100,\ j = 1,2\); \(10 \le h_{j} \le 200,\ j = 3,4\)

BKA was applied to this problem and obtained the optimal function value \(f(H) = 5887.364927\) with the structure variables \(H = (0.778433, 0.384690, 40.319619, 200)\). Table 13 displays the optimal values and variables reached by BKA and its comparison algorithms, demonstrating how well each algorithm resolved this problem; a lower value indicates better performance. The results indicate that BKA has discovered a new structure that achieves lower manufacturing costs than the alternatives.
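
As a sanity check, the reported cost can be reproduced from the widely used textbook form of the pressure-vessel objective; this formulation is an assumption here, since the paper's own equations are not reproduced in this excerpt.

```python
def pressure_vessel_cost(h):
    """Standard textbook objective for the pressure-vessel design problem
    with h = [Ts, Th, R, L]; assumed to match the formulation used above."""
    Ts, Th, R, L = h
    return (0.6224 * Ts * R * L
            + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L
            + 19.84 * Ts**2 * R)

print(pressure_vessel_cost([0.778433, 0.384690, 40.319619, 200]))  # ~5887.36
```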

4.2 Design issue with tension/compression springs

This engineering challenge aims to reduce the weight of the coil while satisfying three constraints that ensure the design meets the relevant engineering requirements. The problem can be expressed mathematically as follows:

Consider variable \(H = [h_{1} ,h_{2} ,h_{3} ] = [d,D,N]\)

Variables range  \(0.05 \le h_{1} \le 2,0.25 \le h_{2} \le 1.3,2 \le h_{3} \le 15\)  

Table 14 displays the optimal values and variables reached by BKA and its comparison algorithms, illustrating how well each algorithm resolved this problem. BKA obtains the optimal function value \(f(H) = 0.01267027\) with the structure variables \(H = (0.051173, 0.344426, 12.047782)\). The experimental and comparative results demonstrate that the BKA algorithm produces better solutions for this class of problems, giving engineers and decision-makers a reliable tool for improving design, planning, and decision-making and for achieving higher-quality engineering solutions.

4.3 Welded beam design

This engineering challenge aims to minimize the weight of the welded beam while satisfying four constraints. The weld thickness, the length of the attached bar, the bar height, and the bar thickness are the four decision variables to be optimized. For this engineering problem, the objective function representing the weight of the welded beam is defined as:

Consider variable \(H = [h_{1} ,h_{2} ,h_{3} ,h_{4} ] = [h,l,t,b]\)

Minimize: \(f(H) = 1.10471h_{2} h_{1}^{2} + 0.04811h_{3} h_{4} \left( 14 + h_{2} \right)\)

Variables range \(P = 6000\,{\text{lb}},\ L = 14\,{\text{in}},\ E = 30 \times 10^{6}\,{\text{psi}},\ G = 12 \times 10^{6}\,{\text{psi}},\ \tau_{\max} = 13{,}000\,{\text{psi}},\ \sigma_{\max} = 30{,}000\,{\text{psi}},\ \delta_{\max} = 0.25\,{\text{in}}\); \(0.1 \le h_{1} \le 2,\ 0.1 \le h_{2} \le 10,\ 0.1 \le h_{3} \le 10,\ 0.1 \le h_{4} \le 2\).

BKA obtains the optimal function value \(f(H) = 1.724853\) with the structure variables \(H = (0.205730, 3.470488, 9.036622, 0.205730)\). The results in Table 15 indicate that BKA delivers better solutions to such problems. Analysis and comparison show that, through its flexible heuristic search and optimization mechanisms, the BKA algorithm obtains better solutions under the given constraints, adapts to different problem characteristics and solution requirements, and achieves a high success rate and accuracy. This gives engineers and decision-makers a reliable tool and method for improving the design and decision-making process and achieving higher-quality engineering solutions.
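
Substituting the reported variables into the objective function stated above reproduces the optimal value, as the quick numerical check below shows; the check itself is illustrative and not part of the original analysis.

```python
def welded_beam_weight(h1, h2, h3, h4):
    """Objective f(H) = 1.10471*h2*h1^2 + 0.04811*h3*h4*(14 + h2),
    as stated in the problem definition above."""
    return 1.10471 * h2 * h1**2 + 0.04811 * h3 * h4 * (14 + h2)

print(welded_beam_weight(0.205730, 3.470488, 9.036622, 0.205730))  # ~1.7249
```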

4.4 Speed reducer design problem

This issue aims to reduce the reducer device's weight while meeting 11 constraints. To describe this problem, we can use the following mathematical expression:

Consider variable \(H = [h_{1} ,h_{2} ,h_{3} ,h_{4} ,h_{5} ,h_{6} ,h_{7} ] = [b,m,p,l_{1} ,l_{2} ,d_{1} ,d_{2} ]\)

Variable range \(2.6 \le h_{1} \le 3.6,0.7 \le h_{2} \le 0.8,17 \le h_{3} \le 28,7.3 \le h_{4} \le 8.3\)

The optimal values and corresponding optimal variables reached by the BKA algorithm and its comparison algorithms are listed in Table 16. These values offer a simple way to compare how well the various algorithms perform on this problem. BKA obtains the optimal function value \(f(H) = 2994.47107\) with the structure variables \(H = (3.5, 0.7, 17, 7.3, 7.71532, 3.350215, 5.286654)\). Comparing BKA's results with those of the other algorithms shows that it solves the problem more efficiently and produces better optimal values, suggesting that it locates solutions closer to the problem's global optimum. These optimal variables also serve as useful references for understanding the problem's solution space and the attainability of the optimization results.

4.5 Three-bar truss design problem

This problem aims to reduce the weight of the bar structure while keeping the total load constant. To achieve this, three constraint conditions must be considered: the stress, buckling, and deflection constraints of each bar. First, the stress constraint ensures that, under the design working load, the stress carried by each bar does not exceed its bearing capacity, guaranteeing the safety and reliability of the structure; the stress limit is determined from the material strength and the force carried by the bar. Second, the buckling constraint ensures that no member buckles under load. Buckling is the instability of a member under compression, which can lead to structural failure; to avoid it, the members' length, cross-sectional shape, and material must be chosen so that the structure can withstand the design load. Finally, the deflection constraint ensures that the members have sufficient stiffness and stability under load. Deflection is the bending deformation of a member under external forces; to control it, the bar geometry and the material stiffness must be limited according to the design requirements. By satisfying these three constraints simultaneously, engineers can minimize the weight of the bar structure while keeping the total load unchanged, making efficient use of material and reducing engineering cost while ensuring structural safety and performance.

Table 17, which compares the BKA algorithm to the other algorithms, shows the optimal values and corresponding optimal variables. This table offers comparative analysis information that allows us to assess how well the various algorithms perform when solving the problem. BKA obtains the optimal function value \(f(H) = 263.895843\) with the structure variables \(H = (0.788675, 0.408248)\). By analyzing the data, it can be deduced that the BKA algorithm offers a superior solution to these engineering problems.

4.6 Analysis of the results of engineering design problems

The results of the five constrained engineering design problems above show that BKA achieved the optimal results. The reasons why BKA achieves optimal results in constrained design problems are analyzed below:

Advantages of swarm intelligence: The BKA algorithm is based on swarm intelligence, which enables interaction and information exchange between individuals in a group. The swarm intelligence algorithm can search for the optimal solution through individual cooperation and collaboration and has strong robustness and global search ability. Therefore, individuals in the BKA algorithm can better explore the solution space and find optimal results through the collaborative effect of swarm intelligence.

Parameter optimization and adjustment: The BKA algorithm includes several parameters, such as the control parameters of the Cauchy distribution and the leader-selection strategy. By tuning these parameters appropriately, the BKA algorithm can better adapt to different engineering cases; reasonable parameter settings improve the algorithm's performance and effectiveness on specific problems and thus help it achieve optimal results.

The BKA algorithm adopts the Cauchy distribution strategy, which gives the algorithm a strong global search ability. The Cauchy distribution has a relatively wide tail, which allows for a wider search of the solution space and avoids falling into local optima. Therefore, BKA can traverse more solution spaces in different engineering examples and has a greater probability of finding the global optimal solution.

The BKA algorithm introduces a leader strategy to guide the algorithm's entire optimization process. By selecting individuals with high fitness values as leaders, other individuals learn and improve solutions through interaction with the leader. Leaders usually have relatively good solutions; through their guidance, the entire group can evolve toward a more optimal solution. Therefore, BKA can accelerate convergence and achieve optimal results through leader strategy in different engineering examples.

5 Conclusion and future works

This article presents the Black Kite Algorithm (BKA), a new swarm intelligence optimization algorithm inspired by the attack and migration behaviors of Black-winged kites. The algorithm mimics the Black-winged kites' high predatory skills and integrates a migratory strategy to enhance search capabilities, striking a balance between local and global optima. The study's main contents are:

1. Evaluation of BKA's performance on the CEC-2017 test set, the CEC-2022 test set, and 18 complex functions, demonstrating superior results across functions of various characteristics and complexities.

2. Statistical validation using the Friedman and Wilcoxon signed-rank tests, in which BKA secured first place, confirming its effectiveness and scientific reliability.

3. Practical application of BKA to five engineering cases involving challenging conditions and constrained search spaces, where it shows significant superiority by quickly converging to high-quality solutions and exhibiting excellent performance.

In future research, BKA can be integrated with other well-known strategies, such as adversarial learning mechanisms (Lian et al. 2023) and chaotic mapping (Liu et al. 2023), to further enhance its optimization performance. BKA can also be applied to other engineering problems in the future, such as the multi-disc clutch brake design problem (Yu et al. 2020) and the step cone pulley problem (Nematollahi et al. 2021).

Abdel-Basset M, Mohamed R, Sallam KM, Chakrabortty RK (2022) Light spectrum optimizer: a novel physics-inspired metaheuristic optimization algorithm. Mathematics 10:3466


Abdel-Basset M, Mohamed R, Azeem SAA, Jameel M, Abouhawwash M (2023a) Kepler optimization algorithm: a new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl-Based Syst 268:110454

Abdel-Basset M, Mohamed R, Jameel M, Abouhawwash M (2023b) Spider wasp optimizer: a novel meta-heuristic optimization algorithm. Artif Intell Rev 10:11675–11738

Abdel-Basset M, Mohamed R, Zidan M, Jameel M, Abouhawwash M (2023c) Mantis search algorithm: a novel bio-inspired algorithm for global optimization and engineering design problems. Comput Methods Appl Mech Eng 415:116200


Abdollahzadeh B, Gharehchopogh FS, Mirjalili S (2021) African vultures optimization algorithm: a new nature-inspired metaheuristic algorithm for global optimization problems. Comput Ind Eng 158:107408

Abualigah L, Yousri D, Abd Elaziz M, Ewees AA, Al-qaness MAA, Gandomi AH (2021) Aquila optimizer: a novel meta-heuristic optimization algorithm. Comput Ind Eng 157:107250

Al-Masri E, Souri A, Mohamed H, Yang W, Olmsted J, Kotevska O (2023) Energy-efficient cooperative resource allocation and task scheduling for internet of things environments. Int Things 23:100832

Atban F, Ekinci E, Garip Z (2023) Traditional machine learning algorithms for breast cancer image classification with optimized deep features. Biomed Signal Process Control 81:104534

Azizi M, Aickelin U, Khorshidi HA, Baghalzadeh Shishehgarkhaneh M (2023) Energy valley optimizer: a novel metaheuristic algorithm for global and engineering optimization. Sci Rep 13:226

Banaie-Dezfouli M, Nadimi-Shahraki MH, Beheshti Z (2023) BE-GWO: binary extremum-based grey wolf optimizer for discrete optimization problems. Appl Soft Comput 146:110583

Berger L, Bosetti V (2020) Characterizing ambiguity attitudes using model uncertainty. J Econ Behav Organ 180:621–637

Beyer H-G, Schwefel H-P (2002) Evolution strategies—a comprehensive introduction. Nat Comput 1:3–52

Bingi J, Warrier AR, Cherianath V (2023) Dielectric and plasmonic materials as random light scattering media. In: Haseeb ASMA (ed) Encyclopedia of materials: electronics. Academic Press, Oxford, pp 109–124


Boulkroune A, Haddad M, Li H (2023) Adaptive fuzzy control design for nonlinear systems with actuation and state constraints: an approach with no feasibility condition. ISA Trans 142:1–11

Braik MS (2021) Chameleon swarm algorithm: a bio-inspired optimizer for solving engineering design problems. Expert Syst Appl 174:114685

Chakraborty S, Nama S, Saha AK (2022) An improved symbiotic organisms search algorithm for higher dimensional optimization problems. Knowl-Based Syst 236:107779

Chakraborty P, Nama S, Saha AK (2023) A hybrid slime mould algorithm for global optimization. Multimed Tools Appl 82:22441–22467

Chen Y, Dang B, Wang C, Wang Y, Yang Y, Liu M, Bi H, Sun D, Li Y, Li J, Shen X, Sun Q (2023) Intelligent designs from nature: biomimetic applications in wood technology. Prog Mater Sci 139:101164

Cheng M-Y, Prayogo D (2014) Symbiotic organisms search: a new metaheuristic optimization algorithm. Comput Struct 139:98–112

Cheng Y, Wen Z, He X, Dong Z, Zhangshang M, Li D, Wang Y, Jiang Y, Wu Y (2022) Ecological traits affect the seasonal migration patterns of breeding birds along a subtropical altitudinal gradient. Avian Research 13:100066

Chopra N, Ansari MM (2022) Golden jackal optimization: a novel nature-inspired optimizer for engineering applications. Expert Syst Appl 198:116924

Dehghani M, Montazeri Z, Trojovská E, Trojovský P (2022) Coati optimization algorithm: a new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl Based Syst 259:110011

Fan J, Zhou X (2023) Optimization of a hybrid solar/wind/storage system with bio-generator for a household by emerging metaheuristic optimization algorithm. J Energy Storage 73:108967

Faramarzi A, Heidarinejad M, Mirjalili S, Gandomi AH (2020a) Marine predators algorithm: a nature-inspired metaheuristic. Expert Syst Appl 152:113377

Faramarzi A, Heidarinejad M, Stephens B, Mirjalili S (2020b) Equilibrium optimizer: a novel optimization algorithm. Knowl-Based Syst 191:105190

Feng R, Shen C, Guo Y (2024) Digital finance and labor demand of manufacturing enterprises: theoretical mechanism and heterogeneity analysis. Int Rev Econ Financ 89:17–32

Flack A, Aikens EO, Kölzsch A, Nourani E, Snell KRS, Fiedler W, Linek N, Bauer H-G, Thorup K, Partecke J, Wikelski M, Williams HJ (2022) New frontiers in bird migration research. Curr Biol 32:R1187–R1199

Hansen N, Kern S (2004) Evaluating the CMA evolution strategy on multimodal test functions. In: Yao X, Burke EK, Lozano JA, Smith J, Merelo-Guervós JJ, Bullinaria JA, Rowe JE, Tiňo P, Kabán A, Schwefel H-P (eds) Parallel problem solving from nature—PPSN VIII. Springer, Berlin, pp 282–291

Hu G, Chen L, Wei G (2023) Enhanced golden jackal optimizer-based shape optimization of complex CSGC-Ball surfaces. Artif Intell Rev 56:2407–2475

Inceyol Y, Cay T (2022) Comparison of traditional method and genetic algorithm optimization in the land reallocation stage of land consolidation. Land Use Policy 115:105989

Jain M, Singh V, Rani A (2019) A novel nature-inspired algorithm for optimization: squirrel search algorithm. Swarm Evol Comput 44:148–175

Jiang M-r, Feng X-f, Wang C-p, Fan X-l, Zhang H (2023) Robust color image watermarking algorithm based on synchronization correction with multi-layer perceptron and Cauchy distribution model. Appl Soft Comput 140:110271

Kennedy J, Eberhart R (1995) Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks, IEEE Service Center, Piscataway, 12:1941–1948


Koza JR (1992) Genetic programming: on the programming of computers by means of natural selection. MIT Press, Cambridge

Kumar M, Kulkarni AJ, Satapathy SC (2018) Socio evolution & learning optimization algorithm: a socio-inspired optimization methodology. Futur Gener Comput Syst 81:252–272

Kumar P, Govindaraj V, Erturk VS, Nisar KS, Inc M (2023) Fractional mathematical modeling of the stuxnet virus along with an optimal control problem. Ain Shams Eng J 14:102004

Kuo HC, Lin CH (2013) Cultural evolution algorithm for global optimizations and its applications. J Appl Res Technol 11:510–522

Lees AC, Gilroy JJ (2021) Bird migration: when vagrants become pioneers. Curr Biol 31:R1568–R1570

Lian B, Xue W, Xie Y, Lewis FL, Davoudi A (2023) Off-policy inverse Q-learning for discrete-time antagonistic unknown systems. Automatica 155:111171

Liu L, Xu X (2023) Self-attention mechanism at the token level: gradient analysis and algorithm optimization. Knowl-Based Syst 277:110784

Liu R, Liu H, Zhao M (2023) Reveal the correlation between randomness and Lyapunov exponent of n-dimensional non-degenerate hyper chaotic map. Integration 93:102071

Melman A, Evsutin O (2023) Comparative study of metaheuristic optimization algorithms for image steganography based on discrete Fourier transform domain. Appl Soft Comput 132:109847

Mirjalili S (2015) The ant lion optimizer. Adv Eng Softw 83:80–98

Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67

Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61

Mirjalili S, Mirjalili SM, Hatamlou A (2016) Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl 27:495–513

Mirrashid M, Naderpour H (2023) Incomprehensible but intelligible-in-time logics: theory and optimization algorithm. Knowl-Based Syst 264:110305

Moghdani R, Salimifard K (2018) Volleyball premier league algorithm. Appl Soft Comput 64:161–185

Nadimi-Shahraki MH, Zamani H (2022) DMDE: diversity-maintained multi-trial vector differential evolution algorithm for non-decomposition large-scale global optimization. Expert Syst Appl 198:116895

Nadimi-Shahraki MH, Zamani H, Asghari Varzaneh Z, Mirjalili S (2023a) A systematic review of the whale optimization algorithm: theoretical foundation, improvements, and hybridizations. Arch Comput Methods Eng 30:4113–4159

Nadimi-Shahraki MH, Zamani H, Fatahi A, Mirjalili S (2023b) MFO-SFR: an enhanced moth-flame optimization algorithm using an effective stagnation finding and replacing strategy. Mathematics 11:862

Nama S (2021) A modification of I-SOS: performance analysis to large scale functions. Appl Intell 51:7881–7902

Nama S (2022) A novel improved SMA with quasi reflection operator: performance analysis, application to the image segmentation problem of Covid-19 chest X-ray images. Appl Soft Comput 118:108483

Nama S, Saha AK (2020) A new parameter setting-based modified differential evolution for function optimization. Int J Model Simul Sci Comput 11:2050029

Nama S, Saha AK (2022) A bio-inspired multi-population-based adaptive backtracking search algorithm. Cogn Comput 14:900–925

Nama S, Saha AK, Sharma S (2022a) Performance up-gradation of symbiotic organisms search by backtracking search algorithm. J Ambient Intell Humaniz Comput 13:5505–5546

Nama S, Sharma S, Saha AK, Gandomi AH (2022b) A quantum mutation-based backtracking search algorithm. Artif Intell Rev 55:3019–3073

Nama S, Saha AK, Chakraborty S, Gandomi AH, Abualigah L (2023) Boosting particle swarm optimization by backtracking search algorithm for optimization problems. Swarm Evol Comput 79:101304

Naruei I, Keynia F (2022) Wild horse optimizer: a new meta-heuristic algorithm for solving engineering optimization problems. Eng Comput 38:3025–3056

Naruei I, Keynia F, Molahosseini AS (2021) Hunter–prey optimization: algorithm and applications. Soft Comput 26:1279–1314

Nematollahi E, Zare S, Maleki-Moghaddam M, Ghasemi A, Ghorbani F, Banisi S (2021) DEM-based design of feed chute to improve performance of cone crushers. Miner Eng 168:106927

Ramli R, Fauzi A (2018) Nesting biology of black-shouldered kite (Elanus caeruleus) in oil palm landscape in Carey Island, Peninsular Malaysia. Saudi J Biol Sci 25:513–519

Sahoo SK, Saha AK, Nama S, Masdari M (2023) An improved moth flame optimization algorithm based on modified dynamic opposite learning strategy. Artif Intell Rev 56:2811–2869

Satapathy S, Naik A (2016) Social group optimization (SGO): a new population evolutionary optimization technique. Complex Intell Syst 2:173–203

Seyyedabbasi A, Kiani F (2023) Sand cat swarm optimization: a nature-inspired algorithm to solve global optimization problems. Eng Comput 39:2627–2651

Sharma S, Chakraborty S, Saha AK, Nama S, Sahoo SK (2022a) mLBOA: a modified butterfly optimization algorithm with lagrange interpolation for global optimization. J Bionic Eng 19:1161–1176

Sharma S, Saha AK, Roy S, Mirjalili S, Nama S (2022b) A mixed sine cosine butterfly optimization algorithm for global optimization and its application. Clust Comput 25:4573–4600

Sharma A (2015) Gene expression programming: a new adaptive algorithm for solving problems. arXiv preprint cs/0102027

Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12:702–713

Su H, Zhao D, Heidari AA, Liu L, Zhang X, Mafarja M, Chen H (2023) RIME: a physics-based optimization. Neurocomputing 532:183–214

Wan M, Ye C, Peng D (2023) Multi-period dynamic multi-objective emergency material distribution model under uncertain demand. Eng Appl Artif Intell 117:105530

Wang W-c, Xu L, Chau K-w, Xu D-m (2020) Yin-Yang firefly algorithm based on dimensionally Cauchy mutation. Expert Syst Appl 150:113216

Wang W-c, Xu L, Chau K-w, Zhao Y, Xu D-m (2022) An orthogonal opposition-based-learning Yin–Yang-pair optimization algorithm for engineering optimization. Eng Comput 38:1149–1183

Wang L, Gao K, Lin Z, Huang W, Suganthan PN (2023a) Problem feature based meta-heuristics with Q-learning for solving urban traffic light scheduling problems. Appl Soft Comput 147:110714

Wang W-c, Xu L, Chau K-w, Liu C-j, Ma Q, Xu D-m (2023b) Cε-LDE: a lightweight variant of differential evolution algorithm with combined ε constrained method and Lévy flight for constrained optimization problems. Expert Syst Appl 211:118644

Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1:67–82

Wu G, Mallipeddi R, Suganthan P (2016) Problem definitions and evaluation criteria for the CEC 2017 competition and special session on constrained single objective real-parameter optimization. South Korea and Nanyang Technological University, Singapore

Wu C-F, Lai J-H, Chen S-H, Trac LVT (2023) Key factors promoting the niche establishment of black-winged kite Elanus caeruleus in farmland ecosystems. Ecol Ind 149:110162

Xie W, Huang P (2021) Extreme estimation of wind pressure with unimodal and bimodal probability density function characteristics: a maximum entropy model based on fractional moments. J Wind Eng Ind Aerodyn 214:104663

Xu W, Zhao H, Lv S (2023a) Robust multitask diffusion normalized M-estimate subband adaptive filter algorithm over adaptive networks. J Franklin Inst 360:11197–11219

Xu Y, Du R, Pei J (2023b) The investment risk evaluation for onshore and offshore wind power based on system dynamics method. Sustain Energy Technol Assess 58:103328

Yazdani D, Branke J, Omidvar MN, Li X, Li C, Mavrovouniotis M, Nguyen T, Yao X (2021) IEEE CEC 2022 competition on dynamic optimization problems generated by generalized moving peaks benchmark. arXiv preprint arXiv:2106.06174

Yu L, Ma B, Chen M, Li H, Liu J (2020) Investigation on the thermodynamic characteristics of the deformed separate plate in a multi-disc clutch. Eng Fail Anal 110:104385

Zaman SI, Khan S, Zaman SAA, Khan SA (2023) A grey decision-making trial and evaluation laboratory model for digital warehouse management in supply chain networks. Dec Anal J 8:100293

Zamani H, Nadimi-Shahraki MH, Gandomi AH (2021) QANA: quantum-based avian navigation optimizer algorithm. Eng Appl Artif Intell 104:104314

Zamani H, Nadimi-Shahraki MH, Gandomi AH (2022) Starling murmuration optimizer: a novel bio-inspired algorithm for global and engineering optimization. Comput Methods Appl Mech Eng 392:114616

Zervoudakis K, Tsafarakis S (2020) A mayfly optimization algorithm. Comput Ind Eng 145:106559

Zhao S, Zhang T, Ma S, Chen M (2022) Dandelion optimizer: a nature-inspired metaheuristic algorithm for engineering applications. Eng Appl Artif Intell 114:105075

Zhao H, Ning X, Liu X, Wang C, Liu J (2023a) What makes evolutionary multi-task optimization better: a comprehensive survey. Appl Soft Comput 145:110545

Zhao S, Zhang T, Ma S, Wang M (2023b) Sea-horse optimizer: a novel nature-inspired meta-heuristic for global optimization problems. Appl Intell 53:11833–11860


Acknowledgements

The authors are grateful for the support of the special project for collaborative science and technology innovation in 2021 (No: 202121206) and the Henan Province University Scientific and Technological Innovation Team (No: 18IRTSTHN009).

Author information

Authors and affiliations

College of Water Resources, North China University of Water Resources and Electric Power, Zhengzhou, 450046, China

Jun Wang, Wen-chuan Wang, Xiao-xue Hu, Lin Qiu & Hong-fei Zang


Contributions

Jun Wang: conceptualization, methodology, data curation, writing—original draft. Wen-chuan Wang: conceptualization, methodology, writing—original draft, formal analysis, data curation. Xiao-xue Hu: writing—original draft, preparing figures. Lin Qiu: investigation, formal analysis. Hong-fei Zang: writing—original draft, formal analysis.

Corresponding author

Correspondence to Wen-chuan Wang.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Consent to participate

Informed consent was obtained from all participants included in the study.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article

Wang J, Wang W-c, Hu X-x et al (2024) Black-winged kite algorithm: a nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif Intell Rev 57:98. https://doi.org/10.1007/s10462-024-10723-4


Accepted: 04 February 2024

Published: 23 March 2024

DOI: https://doi.org/10.1007/s10462-024-10723-4


Keywords

  • Nature-inspired optimization
  • Black-winged kite algorithm
  • Meta-heuristic
  • Constrained problem
