Literature DB >> 35713832

A thematic analysis-based model for identifying the impacts of natural crises on a supply chain for service integrity: a text analysis approach.

Mohammad Reza Sheikhattar1, Navid Nezafati2, Sajjad Shokouhyar1.   

Abstract

Numerous studies have been conducted to identify the effects of natural crises on supply chain performance. Conventional analysis methods are based on either manual filtering or data-driven methods. Manual filtering suffers from validation problems due to sampling limitations, while data-driven methods suffer from the nature of crisis data, which are vague and complex. This study aims to present an intelligent analysis model that automatically identifies the effects of natural crises, such as the COVID-19 pandemic, on the supply chain through metadata generated on social media. The paper presents a thematic analysis framework to extract knowledge under user steering. The framework uses a text-mining approach, including co-occurrence term analysis and knowledge map construction. As a case study to validate the proposed model, we retrieved, cleaned, and analyzed 1024 online textual reports on supply chain crises published during the COVID-19 pandemic in 2019-2021. We conducted a thematic analysis of the collected data and obtained a knowledge map of the impact of the COVID-19 crisis on the supply chain. The resultant knowledge map consists of five main areas (and related sub-areas): (1) food retail, (2) food services, (3) manufacturing, (4) consumers, and (5) logistics. We checked and validated the analytical results with field experts. The experiment yielded 53 classified crisis knowledge propositions from 25,272 sentences containing 631,799 terms (31,864 unique terms) using just three user-system interaction steps, which demonstrates the model's high performance. The results lead us to conclude that the proposed model can be used effectively and efficiently as a decision support system, especially for supply chain crisis analysis.
© 2022. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.

Keywords:  Pandemic crisis; Supply chain crisis management; Supply chain risk monitoring; Text-mining; Thematic analysis model

Year:  2022        PMID: 35713832      PMCID: PMC9204682          DOI: 10.1007/s11356-022-21380-x

Source DB:  PubMed          Journal:  Environ Sci Pollut Res Int        ISSN: 0944-1344            Impact factor:   5.190


Introduction

Global supply chains (SC) have always been vulnerable to shocks, especially in exporting countries (Trautrims et al. 2020; Shanker et al. 2021). This vulnerability stems from factors that disrupt the integrated production of goods and services between these countries and their importing trading partners (Maillet et al. 2019). Recently, the COVID-19 pandemic has posed serious threats and risks to SCs in all sectors (Karwasra et al. 2021; Sharma et al. 2020). These risks are mainly due to the lockdown measures that governments adopt as health policies against the COVID-19 outbreak, which impose restrictions on “movement,” “production,” “logistics,” and “business activities” (Menut et al. 2020; Karwasra et al. 2021). For example, China was one of India’s primary suppliers of raw materials for drug production between 2018 and 2019. When the pandemic disrupted these imports, the Indian pharmaceutical industry faced shortages and delays in the supply and distribution of pharmaceutical products (Guerin et al. 2020). The shipping industry, which carries about 90% of world trade, has been severely affected by government lockdown policies; accordingly, the World Trade Organization (WTO) announced that the volume of world trade decreased by about 23% by the end of 2020 (Palmeter et al. 2022). Also, due to disruption in the food SC, 285 million people worldwide suffered from severe starvation by the end of 2020 (Rizou et al. 2020). Such changes significantly increase the need to examine pandemic effects on the SC as an essential and distinctive case study. Given the extensive discussion and scientific research on the consequences of the COVID-19 crisis conducted on social media, these media have become a constructive data source for identifying and analyzing risk in the SC (Aghion et al. 2008; Bakhtin et al. 2020; Spitsberg et al. 2013; Momeni and Rost 2016).
Various quantitative and qualitative methods have been used to analyze SC crises in the literature (Wan et al. 2021). The comprehensiveness of findings from qualitative methods is questionable due to sampling limitations (Ponis and Ntalla 2016), and there is concern about result bias introduced by human judgment (Arlinghaus et al. 2020; Schorsch et al. 2017; Puljić 2010). Following the growth of data and analytical news on social media, research has recently moved toward quantitative data-driven methods. However, these methods also have major drawbacks: because risk is ambiguous, complex, and carries different meanings, processing such data with pure data mining methods is impossible and requires human insight to identify risks. This scientific gap in crisis analysis and identification persists. Therefore, one serious motivation for this research is to provide a practical framework for analyzing and identifying the effects of a crisis on the SC using the large amount of available data. The framework gathers the contents of online analytical reports, converts them into interpretable structured information, and finally provides classified knowledge on crisis analysis through user interactions. It could be used in the decision support systems of companies or governments. To validate the proposed model’s effectiveness, we collected 975 online reports about the impact of COVID-19 on SCs between 2019 and 2021 and analyzed them using the framework. In general, this paper presents a framework for analyzing the effects of a crisis on the SC and reports the results of a case study evaluating the framework's effectiveness. The results the user achieves from interactions using thematic analysis could also be used to unfold research gaps and future research guidelines in SC crisis analysis. The rest of the paper is organized as follows.
The “Literature review and related works” section provides available knowledge on the research topics and related research and states the research gap. The “Methodology: framework for identifying the effects of natural crises on the supply chain” section presents the methodology used in this research. The “Case analysis” section evaluates the framework’s effectiveness by analyzing the impact of COVID-19 on the SC case study. The “Discussion” section discusses the research findings, and finally, the “Conclusion” section concludes the paper.

Literature review and related works

Text analysis technical methods

Various text analysis techniques related to this research are discussed in this section, including Word2vec, knowledge map, TFIDF, topic modeling, and text-mining.

Word2vec model

In order to process texts, words must be converted to numerical quantities. One of the most widely used methods is the word2vec model, a neural network-based model that converts words into numeric vectors (Mikolov et al. 2013). Its input is a text corpus, and its output is a set of real-valued feature vectors; each vector represents, or encodes, the meaning of one word in the corpus. Thus, words that are closer in vector space are expected to be similar in meaning (Jurafsky and Martin 2008). Word2vec offers two architectures for learning the underlying word representations with neural networks: Continuous Bag-of-Words (CBOW) and Skip-gram. The CBOW model is used for word2vec in this paper.
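In practice the embeddings would be trained with a library such as gensim (whose `Word2Vec` class uses CBOW when `sg=0`). The principle that "closer in vector space means similar in meaning" can be illustrated with a minimal cosine-similarity sketch; the three terms and their 3-dimensional vectors below are hand-crafted stand-ins, not trained embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity: dot product divided by the vector norms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy, hand-crafted 3-d "embeddings" (illustrative stand-ins for learned
# word2vec vectors, which would typically have 100-300 dimensions)
vectors = {
    "shortage":  [0.9, 0.1, 0.0],
    "scarcity":  [0.8, 0.2, 0.1],
    "logistics": [0.1, 0.9, 0.3],
}

print(cosine(vectors["shortage"], vectors["scarcity"]))   # high: similar meaning
print(cosine(vectors["shortage"], vectors["logistics"]))  # low: dissimilar
```

Near-synonyms end up with nearly parallel vectors (similarity close to 1), while unrelated terms point in different directions.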

Co-occurrence network and knowledge map

One of the text analysis methods used in this research is the co-occurrence network, also referred to as the knowledge map (KM) (Segev 2020). The method visualizes potential relationships between concepts or other entities represented in textual data (Freilich et al. 2010). Building a co-occurrence network involves three steps (Segev 2021): (1) identifying keywords in the text, (2) calculating term frequencies, and (3) analyzing the network to find central terms and clusters of terms. In a co-occurrence network, pairs of terms are connected based on their joint presence in a textual unit; the networks are generated by linking term pairs using a set of criteria that define concurrency (Freilich et al. 2010).
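The construction steps above can be sketched as follows, taking the sentence as the textual unit; the function name and the toy corpus are illustrative assumptions, not the authors' implementation:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(sentences, min_count=1):
    """Link every pair of terms that appears together in a sentence
    (the textual unit); the edge weight counts the co-occurrences."""
    edges = Counter()
    for sent in sentences:
        terms = sorted(set(sent.lower().split()))  # unique terms per unit
        for a, b in combinations(terms, 2):        # all unordered pairs
            edges[(a, b)] += 1
    # keep only edges meeting the concurrency criterion
    return {pair: w for pair, w in edges.items() if w >= min_count}

docs = [
    "port congestion delays shipments",
    "port closures cause shipment delays",
    "lockdown closures disrupt logistics",
]
net = cooccurrence_network(docs)
print(net[("delays", "port")])  # 2: the pair co-occurs in two sentences
```

Central terms are then the nodes with the highest weighted degree, and clusters are the densely connected groups of nodes.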

TFIDF

The term frequency-inverse document frequency (TFIDF) is computed to filter common words from the dataset and highlight the most important ones (Jing et al. 2002). It has been widely used to improve processing accuracy as an indicator of general versus important terms (Gudivada et al. 2018). The principle of TFIDF is as follows: terms that occur frequently in every document (such as conjunctions) receive a low rank (Qaiser and Ali 2018). Equations 1–3 show the TFIDF calculation:

tf(t, d) = f(t, d)   (1)

idf(t) = log(N / n_t)   (2)

tfidf(t, d) = tf(t, d) × idf(t)   (3)

In the above equations, f(t, d) is the number of times term t occurs in document d, N is the number of documents in the corpus, and n_t is the number of documents in which term t appears. If the term is not in the corpus, this leads to a division by zero; therefore, it is common to adjust the denominator to 1 + n_t.
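A minimal implementation of Eqs. 1–3 with the smoothed denominator can be sketched as follows (the function name and toy corpus are illustrative):

```python
import math
from collections import Counter

def tfidf(docs):
    """TFIDF per Eqs. 1-3: tf(t,d) * log(N / (1 + n_t)), where the
    1 + n_t denominator avoids division by zero for unseen terms."""
    N = len(docs)
    df = Counter()                      # n_t: document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                 # f(t, d): raw term counts
        scores.append({t: tf[t] * math.log(N / (1 + df[t])) for t in tf})
    return scores

docs = [
    ["supply", "chain", "disruption"],
    ["supply", "chain", "resilience"],
    ["pandemic", "disruption", "and", "the", "supply", "chain"],
]
s = tfidf(docs)
# Note: with the smoothed denominator, a term present in every document
# gets a slightly negative score, i.e., the lowest possible rank.
```

Rare, content-bearing terms such as "resilience" score above ubiquitous ones such as "supply", which is exactly the filtering behavior the framework relies on.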

Topic modeling

In machine learning and natural language processing, topic modeling is a statistical technique for discovering the essential topics in a set of documents, and a widely used text-mining tool for finding hidden semantic structures in text (Blei 2012). Since each document is about a specific topic, certain words are expected to appear in its textual content. The topics extracted by topic modeling are clusters of similar words; topic models are statistical algorithms for discovering these hidden semantic structures (Cao and Li 2007). One of the most common methods for topic modeling is latent Dirichlet allocation (LDA) (Blei 2012), an unsupervised learning model that can classify the text of a document into specific topics. The technique uses the Dirichlet distribution to discover the topics of each document and the words of each topic. Figure 1 presents the core components of the LDA algorithm, where K is the number of topics, N is the number of words in a document, M denotes the number of documents, α is the parameter of the Dirichlet prior on the per-document topic distributions, β is the corresponding parameter of the per-topic word distribution, φ(k) is the word distribution for topic k, θ(i) is the topic distribution for document i, and Z(i, j) is the topic of the jth word in document i. Equation 4 gives the joint distribution:

P(W, Z, θ, φ; α, β) = ∏_{k=1}^{K} P(φ(k); β) ∏_{i=1}^{M} P(θ(i); α) ∏_{j=1}^{N} P(Z(i, j) | θ(i)) P(W(i, j) | φ(Z(i, j)))   (4)
Fig. 1

LDA model

Each word in the corpus is initially assigned to a random topic and tagged with that topic number. Then, according to the algorithm's sampling function, a new topic number is assigned to each word. This process continues until it converges.
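The sampling process described above can be sketched as a minimal collapsed Gibbs sampler; this is an illustrative toy (in practice one would use a library such as gensim's `LdaModel`), and all names and the toy corpus are assumptions:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, K=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Minimal collapsed Gibbs sampler for LDA: every word starts with a
    random topic, then is repeatedly re-sampled from the conditional
    distribution P(z | all other assignments) until convergence."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})       # vocabulary size
    nkw = defaultdict(int)                      # topic-word counts
    ndk = defaultdict(int)                      # doc-topic counts
    nk = [0] * K                                # words per topic
    z = []                                      # topic of each word position
    for i, d in enumerate(docs):                # random initialization
        zs = []
        for w in d:
            k = rng.randrange(K)
            zs.append(k); nkw[(k, w)] += 1; ndk[(i, k)] += 1; nk[k] += 1
        z.append(zs)
    for _ in range(iters):                      # Gibbs sweeps
        for i, d in enumerate(docs):
            for j, w in enumerate(d):
                k = z[i][j]                     # remove current assignment
                nkw[(k, w)] -= 1; ndk[(i, k)] -= 1; nk[k] -= 1
                weights = [(ndk[(i, t)] + alpha) * (nkw[(t, w)] + beta)
                           / (nk[t] + V * beta) for t in range(K)]
                k = rng.choices(range(K), weights=weights)[0]
                z[i][j] = k                     # re-assign
                nkw[(k, w)] += 1; ndk[(i, k)] += 1; nk[k] += 1
    return z

docs = [["price", "inflation", "price"], ["virus", "lockdown", "virus"],
        ["inflation", "price"], ["lockdown", "virus"]]
assignments = lda_gibbs(docs, K=2)
```

With enough sweeps the co-occurring words ("price"/"inflation" vs. "virus"/"lockdown") tend to settle into separate topics.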

Text-mining and its applications in supply chain management

The existing literature confirms the critical role of artificial intelligence in various industry fields such as healthcare, education, crisis management, and production. For example, Sharifi et al. (2021) studied the impact of artificial intelligence and digital style on industry and energy after the COVID-19 pandemic. Their objective was to investigate the effects of COVID-19 on the various fields of medicine, industry, and energy; in particular, they studied the effect of artificial intelligence and digital style in reducing the damage of this deadly virus. Ahmadi et al. (2021) used the capabilities of artificial intelligence to provide an extended pandemic model for estimating the COVID-19 epidemic and assessing its risks, presenting a generalized logistic growth model (LGM) to estimate COVID-19 outbreak sub-waves in Iran. Nasirpour et al. (2021) used a multivariate spatial autoregressive (MSAR) model to investigate the relationship between solar activity and COVID-19 and to predict possible future viruses. Artificial intelligence technology aims to improve the efficiency of the management process during crisis response. Baryannis et al. (2019) conducted a comprehensive review of the SC literature that addresses problems relevant to SCRM using approaches within the range of artificial intelligence. To this end, they examined different definitions and classifications of SC risk and related concepts such as uncertainty. A mapping study was then performed to categorize the existing literature based on the AI method used, from mathematical programming to machine learning and big data analysis, and the specific SCRM task addressed. Their research points out that while risk management is fraught with challenges, identifying sources of risk-related information is one of the primary concerns.
This type of data shows the importance of text-mining, a subset of artificial intelligence, in analyzing SC risk data. Text-mining, also known as text data mining, transforms unstructured text into a structured format to identify meaningful patterns and new insights. Both descriptive and predictive analyses can be used in text-mining (Dang and Ahmad 2014). Typical text-mining tasks include text categorization and clustering, concept/entity extraction, association rule mining, sentiment analysis, document summarization, visualization, and entity-relation modeling (i.e., learning relations between named entities). For example, Chiu and Lin (2018) merged text-mining and Kansei Engineering (Nagamachi 1995) to derive Kansei descriptive terms from actual customer surveys and used them to predict the design of a consumer-preferred product while reducing designers' repetitive tasks. The accuracy of traditional text-mining in analyzing texts, especially texts with complex meanings, is very low because it cannot effectively use the semantic information of the text (Hu and Zhang 2010). Ontology can extract key concepts and their inter-relations and thus provide a common understanding of a domain; combining the two techniques yields more accurate text-mining analysis (Hu and Zhang 2010). Elbattah et al. (2021) combined domain ontology and text analytics techniques to analyze data in healthcare. Text-mining is also used to access knowledge about patents, known as patent knowledge retrieval; Liu et al. proposed a function-based patent knowledge retrieval tool for the conceptual design of innovative products. These previous studies show that the emerging field of text-mining can turn natural language into practical results, gain new insights, manage information loads, and bring artificial intelligence into decision-making. One research area in which text-mining has been widely used is the SC.
The advent of modern information technologies such as IoT, big data, blockchain, and artificial intelligence has created new opportunities for efficient SC management. For example, Akundi et al. (2018) gathered information from various textual sources (e.g., tweets, news, and other social media) to understand how textual data about a given smartphone could affect its SC and management. Mayer et al. examined how text-mining could provide insights into the impact of the coronavirus epidemic on SCs, focusing on epidemic consequences for SC structures related to risk, flexibility, and sustainability (Dowling et al. 2019). They showed that the news coverage of some SC topics, such as risk, flexibility, disruption, and consistency, differs by the type of newspaper and the number of coronavirus disease 2019 (COVID-19) infections. Aday and Aday (2020) evaluated the impact of COVID-19 on the agri-food sector and, using textual report analysis, summarized the recommendations needed to reduce and control the effects of the pandemic. Su and Chen (2018) developed a Twitter-enabled supplier status assessment tool to improve supplier selection, applying text-mining to tweets to retrieve supplier-related information and analyze potential risks and uncertainty. The proposed method was shown to improve the efficiency and accuracy of cross-border e-commerce (CBEC) commodity risk evaluation.

Research gap

Numerous qualitative methods have been developed to analyze the effects of crises on the SC and related risks, such as fuzzy cognitive maps (Bevilacqua et al. 2020), the fuzzy AHP approach (Nazam et al. 2020), the pattern matching technique (Köksal et al. 2018), the analytic network process (Martino et al. 2017), explorative qualitative study (Kam et al. 2011), the qualitative survey approach (Moon et al. 2010), and the Delphi method (Cerruti and Delbufalo 2009). Many of these methods face validation challenges due to limited samples (Ponis and Ntalla 2016). On the other hand, qualitative methods based on human judgment have raised concerns about possible bias (Giannakis and Papadopoulos 2016). Today, large amounts of SC crisis data are available as text-based information from social networks, open portals, and databases for academic purposes. Accordingly, new research has been directed toward data-driven methods, i.e., a text analysis approach to monitoring SC crises and understanding risk patterns (Yan et al. 2019; Chu et al. 2019; Shah et al. 2021; Kara et al. 2020; Da Silva et al. 2020). However, due to the nature of risk data, which are complex, ambiguous, and contradictory, pure data mining methods do not meet the required precision, effectiveness, and efficiency, and may even mislead (Wang and Ye 2018). In this regard, using the ability of text-mining algorithms to analyze large amounts of data, with the assistance of human intelligence to guide the algorithm and iteratively modify the results, can fill the gap in identifying risk factors and investigating risk effects.

Methodology: framework for identifying the effects of natural crises on the supply chain

Our proposed framework is based on the thematic analysis combined with text-mining methods (Braun and Clarke 2006; Guest et al. 2012). According to Fig. 2, the framework consists of three main layers:
Fig. 2

The proposed framework

“Data gathering” on related topics; “Pre-processing” to reduce noise and prepare the data; and “Knowledge discovery” to analyze and extract knowledge in interaction with the user. The layers are explained in the following subsections.

Data gathering layer

The data gathering layer collects data from data sources, generally web pages and social media posts. It contains one component for data collection and delivers the collected data to the next layer, the pre-processing layer. Although data sources come in different formats, e.g., text, image, voice, and video, we focus on the text format because natural language processing techniques are more mature than those for other formats. Since only text sources are considered in the proposed platform, this layer prepares the data as a text corpus for the pre-processing layer. This research focused on identifying the impacts of natural crises on the food SC; therefore, the data sources include newsletters, reports, and events on food, risks, logistics, freight, operations, regulations, and technology.

Pre-processing layer

In Fig. 2, the pre-processing layer receives the corpus from the data gathering layer and prepares transformed forms of the corpus for the processes in the knowledge discovery layer. As shown in the pseudo-code of Fig. 3, this layer contains four components that perform four actions on the corpus, producing the corresponding transformed forms:
Fig. 3

Pseudo-code for the Pre-processing

1. Normalized_corpus creation: removes all URLs, stop words (e.g., “a,” “an,” “them,” “it”), and special characters (e.g., HTML and XML tags) from the text, and replaces every word with its lemmatized form so that words with the same base share the same form.
2. Tokenized_corpus creation: splits the input corpus into its constituent tokens, chunks of information that can be treated as discrete elements with a useful semantic unit for processing.
3. Word2Vec model generation: generates the Word2Vec model of the corpus, which is used in the knowledge discovery layer to identify similar terms.
4. Corpus_sentences creation: sentences are considered knowledge units (KUs) in the proposed framework, so this component divides the corpus into separate sentences.
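Three of the four pre-processing components (normalization, tokenization, and sentence splitting; Word2Vec training is omitted here) can be sketched as follows. All names are illustrative, and a crude plural-stripping rule stands in for real lemmatization, which would use a library such as NLTK or spaCy:

```python
import re

STOP_WORDS = {"a", "an", "the", "it", "them", "is", "of", "and", "to"}

def normalize(text):
    """Normalized_corpus creation: strip URLs, markup tags, special
    characters, and stop words; crudely 'lemmatize' plural forms."""
    text = re.sub(r"https?://\S+", " ", text)   # URLs
    text = re.sub(r"<[^>]+>", " ", text)        # HTML/XML tags
    text = re.sub(r"[^A-Za-z.\s]", " ", text)   # special characters
    tokens = [w for w in text.lower().split() if w not in STOP_WORDS]
    return " ".join(w[:-1] if w.endswith("s") and len(w) > 3 else w
                    for w in tokens)

def tokenize(text):
    """Tokenized_corpus creation: split the corpus into tokens."""
    return text.replace(".", " ").split()

def split_sentences(text):
    """Corpus_sentences creation: sentences are the knowledge units."""
    return [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]

raw = "Ports closed. <b>Shipments</b> delayed, see https://example.org."
corpus = normalize(raw)
print(tokenize(corpus))         # ['port', 'closed', 'shipment', 'delayed', 'see']
print(split_sentences(corpus))  # ['port closed', 'shipment delayed see']
```

Each transformed form feeds a different downstream component: the tokens feed the KM construction, and the sentences become the candidate KUs.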

Knowledge discovery layer

The core part of the framework is the knowledge discovery layer, which provides a user-system interaction facility to extract knowledge iteratively. As shown in Fig. 2, this layer consists of three component groups: crisis KM handling, crisis query handling, and knowledge extraction/evaluation. Figure 4 shows the corresponding pseudo-code of this layer. More details are described below.
Fig. 4

Pseudo-code for the knowledge discovery layer

Crisis knowledge map handling

As discussed in the “Literature review and related works” section, the KM is represented as a network whose nodes are terms from the tokenized corpus and whose links are the meaningful relations between those nodes. The KM is created as a top-down network of terms: the user can select any term at any level to be expanded into more related terms, as co-occurrence terms, at the next level. This component group comprises LDA_Cluster Visualization, Expand_KM, and Visualize_KM.

LDA_Cluster visualization

To create a KM, it is necessary to extract important topics in the whole corpus as key terms. These key terms, which form the first level of the KM, are selected by the user with the assistance of topic modeling and visualization of clusters. At the first step, the KM is initialized to a single node, root_node (line 5 of Fig. 4). Then, clusters are identified using LDA (line 6 of Fig. 4) and visualized as word clouds (lines 7–9 of Fig. 4). Afterward, level 1 of KM is constructed, adding the terms selected by the user from the visualized clusters to the initialized KM with the associated links to the root_node (lines 10–14 of Fig. 4). Level 1 of KM is shown to the user for further expansion of the KM (line 15 of Fig. 4).

Expand_KM

After the first level of the KM is created, each selected term (called a key_term) expands into a subset of relevant terms, called co-occurrence terms, at lower levels of the KM. This operation is repeated for each selected term in each layer, expanding to lower layers based on user preferences, until the user stops the expansion. The main part of the program is the iterative loop in lines 16–26 of Fig. 4. The algorithm uses the Expand_KM procedure to expand the selected key term into co-occurrence terms at the lower layer. The operation of the Expand_KM component is shown as the Expand_KM procedure (lines 28–36 of Fig. 4), which performs three main actions: (1) extracting context terms, (2) filtering unnecessary terms, and (3) extending the knowledge map.

First, it extracts context terms using the Extract_Context_Terms function, which returns the context_terms and their numbers of occurrences. Context terms are the terms in the neighborhood of the selected central key_term, within the win_size range, throughout the tokenized_corpus. The context terms, their relationship to the key term, and the window size range are shown in Fig. 5.
Fig. 5

Relationship between context terms and key term

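The context-term extraction step can be sketched as follows; the function and the toy token sequence are illustrative, not the authors' Extract_Context_Terms implementation:

```python
from collections import Counter

def extract_context_terms(tokens, key_term, win_size=3):
    """Count every term that falls within win_size positions of any
    occurrence of key_term in the tokenized corpus."""
    context = Counter()
    for i, tok in enumerate(tokens):
        if tok == key_term:
            lo = max(0, i - win_size)
            hi = min(len(tokens), i + win_size + 1)
            for j in range(lo, hi):
                if j != i:                 # skip the key term itself
                    context[tokens[j]] += 1
    return context

tokens = ("global shortage of chips hit carmakers as chip shortage "
          "slowed factories").split()
ctx = extract_context_terms(tokens, "shortage", win_size=2)
print(ctx["chips"])  # 1: within two positions of the first occurrence
```

Terms outside every window (here, "hit" and "carmakers") never enter the context counts, which is what limits the expansion to the key term's neighborhood.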

Filtering unnecessary terms

As the second action, Expand_KM filters unnecessary terms so that the co_occurrence_terms are extracted as the key_term expansion in the KM. These co-occurrence terms are the subset of context terms that have a meaningful relationship with the key term and are not merely random neighbors of it. To extract co-occurrence terms, two filtering operations are applied to context_terms to remove unnecessary terms: one based on a TFIDF threshold and one based on a phi threshold. The TFIDF filter (line 30 of Fig. 4) calculates the TFIDF value of the context_terms obtained in the previous step and removes terms whose TFIDF values are below the TFIDF threshold tuned by the user (line 18 of Fig. 4). After this filtering step, the most important terms from freq_terms are stored in the imp_terms (important terms) variable. The second filter is based on the phi coefficient threshold; the phi coefficient measures the correlation of important terms with the key term. As shown in line 31 of Fig. 4, terms in imp_terms whose φ values do not meet the phi threshold (T) are removed, and the remaining terms are stored in the co_occur_terms variable. Table 1 shows the values required for calculating the φ parameter of words X and Y in a set of documents.
Table 1

The required values for calculating the φ parameter of words X and Y in a set of documents

                                  # docs containing Y    # docs not containing Y    Total
# documents containing X          n11                    n10                        n1.
# documents not containing X      n01                    n00                        n0.
Total                             n.1                    n.0                        n
In Table 1, n11 is the number of documents that contain both words X and Y, n00 the number that contain neither, n10 the number that contain X but not Y, and n01 the number that contain Y but not X. The marginal totals are also shown, with n equal to the total number of documents. Equation 5 gives the coefficient φ for the correlation between the words X and Y in Table 1:

φ = (n11·n00 − n10·n01) / sqrt(n1.·n0.·n.1·n.0)   (5)

The correlation coefficient φ ranges from −1 to +1: +1 indicates maximum agreement, −1 indicates maximum disagreement, and 0 indicates no relationship.
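The φ calculation from the document counts of Table 1 can be sketched as follows (the function name and the example counts are illustrative):

```python
import math

def phi(n11, n10, n01, n00):
    """Phi coefficient per Eq. 5: correlation between the presence of
    terms X and Y across documents, ranging from -1 to +1."""
    n1_, n0_ = n11 + n10, n01 + n00   # row totals
    n_1, n_0 = n11 + n01, n10 + n00   # column totals
    denom = math.sqrt(n1_ * n0_ * n_1 * n_0)
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

# X and Y always appear together: 40 of 100 documents contain both,
# the other 60 contain neither -> perfect agreement
print(phi(40, 0, 0, 60))   # 1.0
# X and Y never share a document -> perfect disagreement
print(phi(0, 40, 60, 0))   # -1.0
```

Terms whose φ value with the key term falls below the user-tuned threshold are the ones the second filter removes.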

Extending the knowledge map

As the third action of Expand_KM, all the terms are considered as nodes, and also their relation with the key term from which they are expanded is stored to create and visualize a KM. Each term in the co_occur_terms variable is added to the KM.nodes as a node, and the relation between this node and the key term from which it is generated is stored in the KM.links variable (lines 32–35 of Fig. 4).

Visualize_KM

This component visualizes the relationship between key terms and their co-occurrence terms, as shown by the Visualize_KM component in Fig. 2. After the knowledge map is expanded in the previous step, the nodes and the relationships between them are saved in the KM variable (KM.nodes for nodes and KM.links for links). The Visualize_KM function then visualizes the knowledge map for the user (line 20 of Fig. 4).

Crisis query handling

Enrich_Query

To extract the relevant KUs, a proper query is necessary, and creating a suitable query requires finding the correlated terms in the relevant knowledge domain. The KM created in the previous step expresses the relationships between terms as a (tree-like) directed acyclic graph. A query is a traversal of terms from the root_node to a leaf node of the KM network, selected by the user. The query extracted from the KM may not cover all aspects, and it may be necessary to add or remove concepts; for this purpose, the query is sent to the Enrich_Query component for enrichment (Fig. 2). This operation is presented in the pseudo-code of Fig. 4. In line 21 of the pseudo-code, the Get_Query function obtains the query that the user has created from the KM and stores it in the query variable. In line 22, the Enrich_Query function creates an enriched query; its details are shown in lines 37–47 of the pseudo-code. Because word embeddings support algebraic addition and subtraction of terms, we used “+” and “−” signs to add and remove concepts (terms) when creating a suitable query. Two variables, positive_terms and negative_terms, are initialized (line 38). The terms the user puts in the positive_terms variable are the concepts to be added, and the terms in the negative_terms variable are those to be removed; in lines 39 to 45 of the pseudo-code, the user decides which terms go into each variable. The related mathematical operation is

similar_terms = argmax over v in V of cos(v, Σ_{t ∈ positive_terms} v_t − Σ_{t ∈ negative_terms} v_t)

where the first sum represents the “sum of positive term vectors” and the second the “sum of negative term vectors.” The system extracts similar terms for enriching the query from the vocabulary V (the set of unique words used in the text corpus). This operation is performed by finding the terms with maximum angular similarity to the query vector (expressed as the vector dot product, assuming all term vectors have the same length).
The user selects some terms from the Similar_Term variable as selected_Similar_Term to enrich the query, as shown in line 48 of the pseudo-code of Fig. 4. To create an enriched query, selected_Similar_Term is added to positive_terms to build the enriched_query variable, as shown in line 49 of the pseudo-code in Fig. 4. As a result, the enriched query vector is created according to Eq. 7. The query enrichment operation is shown in the framework presented in Fig. 6.
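The term arithmetic and similarity lookup described above can be sketched with toy word vectors. The vocabulary, the vector dimension, and the function names below are illustrative stand-ins for a word2vec model trained on the SC corpus, not the paper's implementation.

```python
import numpy as np

# Toy word vectors; in the real framework these would come from a word2vec
# model trained on the SC crisis corpus. All names here are illustrative.
rng = np.random.default_rng(0)
vocab = ["food", "retail", "online", "channel", "consumer", "impact", "store"]
vectors = {w: rng.normal(size=16) for w in vocab}

def query_vector(positive_terms, negative_terms):
    """Query arithmetic: sum of positive term vectors minus
    sum of negative term vectors."""
    q = np.zeros(16)
    for t in positive_terms:
        q += vectors[t]
    for t in negative_terms:
        q -= vectors[t]
    return q

def similar_terms(q, exclude, top_k=3):
    """Rank vocabulary terms by cosine similarity to the query vector."""
    scores = {}
    for w, v in vectors.items():
        if w in exclude:
            continue
        scores[w] = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

q = query_vector(["food", "retail", "online"], [])
print(similar_terms(q, exclude={"food", "retail", "online"}))
```

The returned candidates play the role of Similar_Term, from which the user would pick terms to append to positive_terms.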
Fig. 6

Query modification operation


Knowledge extraction/evaluation

Extract_KU

After creating a user-interested crisis query, the Extract_KU component of Fig. 2 extracts the KUs most similar to the query. KUs are the corpus sentences most similar to the enriched query, representing knowledge propositions. The KUs extracted by the machine allow the analyst to form a hypothesis about the crisis and steer subsequent searches toward appropriate key terms. To perform calculations on queries and KUs, the text must be converted into numerical quantities. We used the word2vec algorithm, which processes text by vectorizing the words. Each SC document consists of n terms representing the spatial–temporal information of an SC. The basic idea is to extract unique content-bearing terms from the set of SC documents and then assign each term a weight equal to the product of its term frequency and inverse document frequency (TFIDF). These weighted terms serve as the numerical feature representation for the similarity algorithm. Therefore, an SC document is represented by a d-dimensional feature vector in the term space, where the weight assigned to term j in SC document i is

w_ij = TF_ij × IDF_j,

where TF_ij is the frequency of term j in SC document i, and IDF_j is the inverse document frequency, calculated as

IDF_j = log(N / n_j),

with N the total number of SC documents and n_j the number of documents containing term j. The numerical representation of an SC sentence vector (KU vector) is then built from these weighted SC terms. Equation 10 gives the equivalent formal description of the sentence vector:

v_s = ( Σ_{t=1}^{m} TF_t × IDF_t × v_t ) / (m + ε),

where SC refers to each textual document in the corpus, v_t is the term vector of term t, TF_t is the frequency of t in each SC, IDF_t is the inverse frequency of the documents that include t, and m is the total number of terms in the sentence. A fixed value ε is added to prevent the denominator from becoming zero. We used cosine similarity as the distance function between queries and KUs, according to Eq. 11,

sim(q, v_s) = (q · v_s) / (‖q‖ ‖v_s‖),

and the KUs are ranked accordingly. Figure 4 shows the simulation code of the above operation. In line 23, the Extract_KU function takes the two parameters enriched_query and corpus_sentences, extracts the sentences (KUs) in corpus_sentences most similar to enriched_query, and ranks them by their similarity values.
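A compact sketch of the Extract_KU step under these definitions might look as follows. The sentences, the random stand-in term vectors, and the helper names are assumptions for illustration, not the paper's implementation.

```python
import math
import numpy as np

# Tiny stand-in corpus; the study's corpus has 25,272 sentences.
sentences = [
    "food retail moved to online channels during the pandemic",
    "logistics costs increased due to border inspections",
    "online grocery shopping demand grew among consumers",
]
tokenized = [s.split() for s in sentences]
vocab = sorted({t for s in tokenized for t in s})
dim = 16
rng = np.random.default_rng(1)
term_vec = {t: rng.normal(size=dim) for t in vocab}  # stand-in for word2vec

N = len(tokenized)
df = {t: sum(t in s for s in tokenized) for t in vocab}
EPS = 1e-6  # keeps denominators non-zero, as in Eq. 10

def sentence_vector(tokens):
    """TFIDF-weighted sum of term vectors, divided by (m + eps)."""
    v = np.zeros(dim)
    for t in set(tokens):
        tf = tokens.count(t) / len(tokens)
        idf = math.log(N / df[t])
        v += tf * idf * term_vec[t]
    return v / (len(tokens) + EPS)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + EPS))

query = sentence_vector("food retail online".split())
ranked = sorted(sentences,
                key=lambda s: cosine(query, sentence_vector(s.split())),
                reverse=True)
print(ranked[0])
```

The sorted list plays the role of the ranked KUs returned by Extract_KU.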

Evaluate_KU

After extracting the crisis KUs, it is necessary to evaluate their information value so that a hypothesis or knowledge with high validity can be created. To determine the information value of the extracted KUs, a similarity diagram is first created between the query and each crisis KU, as shown in Fig. 7. As the diagram shows, the gradient of the curve breaks sharply in some places. Each break in the gradient can be interpreted as the border between two separate clusters (C1 to C5 in Fig. 7). Therefore, for each cluster c, the information value IV_c is measured by Eq. 12.
Fig. 7

The diagram of KU similarity with query

In the above equation, s_ij is the similarity value of the ith KU in the jth cluster, σ is the standard deviation of the s_ij values, and M is the number of KUs in the cluster. In line 24 of the pseudo-code in Fig. 4, the Evaluate_KU function evaluates and sorts the KUs based on their cluster information value, and the most valuable sentences are selected.
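The break detection and scoring described above can be sketched as follows. The similarity values, the break threshold, and the exact form of the information value (here mean similarity damped by the cluster's standard deviation) are illustrative assumptions, since Eq. 12 is not reproduced in full here.

```python
import statistics

# Sorted KU-query similarities (descending); illustrative values only.
sims = [0.92, 0.90, 0.89, 0.74, 0.72, 0.55, 0.54, 0.53, 0.31, 0.30]

def split_clusters(values, break_threshold=0.1):
    """Cut the similarity curve wherever the drop between consecutive
    KUs exceeds the threshold (a 'break in the gradient')."""
    clusters, current = [], [values[0]]
    for prev, cur in zip(values, values[1:]):
        if prev - cur > break_threshold:
            clusters.append(current)
            current = []
        current.append(cur)
    clusters.append(current)
    return clusters

def information_value(cluster):
    """Hedged reading of Eq. 12: mean similarity scaled down by the
    cluster's dispersion; the paper's exact formula may differ."""
    sd = statistics.pstdev(cluster)
    return sum(cluster) / len(cluster) / (1 + sd)

clusters = split_clusters(sims)
ivs = [information_value(c) for c in clusters]
print(len(clusters), [round(v, 3) for v in ivs])
```

Ranking clusters by their IVs selects the most valuable KUs, mirroring line 24 of the pseudo-code.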

Case analysis

In this section, to evaluate the framework, we implemented it and ran a case study analyzing the impact of COVID-19 on the SC.

Data collection and pre-processing

In this case study, we collected a large volume of unstructured SC crisis reports from professional websites, including Supply Chain 24/7, Supply Chain Quarterly, Supply Chain Dive, Supply Chain Management Review, SupplyChainDigest, Supply Chain Digital, and Commercial Risk Online, using a crawling technique. These websites publish analytical reports and articles focusing on the supply chain in logistics, transportation, operations, regulation, technology, and risk. The topics of the articles and the titles of the interviews prepared by analysts and managers of various industries indicate that these information sources are suitable input for the presented framework. Many of these news websites tag reports on numerous SC crisis topics, which significantly facilitated access to the information of interest for this case. Terms such as “supply chain crisis,” “supply chain risk AND management,” “supply chain crisis AND COVID-19 impact,” and “supply chain AND pandemic impact” were searched on these websites between 2019 and 2022. As a result, 1024 analytical reports on the supply chain crisis were collected. Careful inspection removed repeated and irrelevant documents, leaving 975 articles suitable for further analysis. We then cleaned and pre-processed these data to remove irrelevant information and prepare them for analysis. The resultant corpus contains 975 reports with 631,799 terms and 31,864 unique term forms. After normalization, the 30 most common, meaningful terms in the SC corpus were extracted, as shown in Table 2.
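A minimal sketch of the cleaning and term-frequency step might look like this. The stop-word list and sample reports are illustrative, and the real pipeline also includes normalization and lemmatization.

```python
import re
from collections import Counter

# Minimal cleaning/normalization sketch; the real pipeline also handles
# HTML stripping, lemmatization, and a domain-specific stop-word list.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "on", "for", "is", "are"}

def preprocess(text):
    """Lowercase, tokenize to alphabetic words, drop stop words."""
    text = text.lower()
    tokens = re.findall(r"[a-z][a-z\-]+", text)
    return [t for t in tokens if t not in STOPWORDS]

reports = [
    "Supply chain risk management during the pandemic crisis.",
    "The pandemic disrupted logistics and food supply chains.",
]
counts = Counter(t for r in reports for t in preprocess(r))
print(counts.most_common(5))
```

Applied to the 975-report corpus, the same counting yields the top-30 term list of Table 2.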
Table 2

Top 30 terms in SC crisis corpus

Rank | Term | Frequency | Rank | Term | Frequency
1 | Risk | 1155 | 16 | Countries | 1652
2 | Supply chain | 10,986 | 17 | Market | 1546
3 | Management | 5553 | 18 | Company | 1493
4 | Crisis | 3465 | 19 | Industry | 1377
5 | Pandemic | 3015 | 20 | Demand | 1252
6 | Companies | 2525 | 21 | Products | 1186
7 | COVID | 2361 | 22 | Production | 1105
8 | Business | 2167 | 23 | Resilience | 1096
9 | Blockchain | 2055 | 24 | Logistics | 1076
10 | Food | 2011 | 25 | Market | 1054
11 | Technology | 2005 | 26 | Security | 1046
12 | Product | 1971 | 27 | System | 1035
13 | Disruptions | 1823 | 28 | Suppliers | 1027
14 | Service | 1765 | 29 | Customers | 1021
15 | Impact | 1701 | 30 | Trade | 1012

LDA_cluster visualization

We applied topic modeling to the analytical reports using the LDA_Cluster component, which divided the reports into five clusters with different topics. Figure 8 shows the word clouds of the clusters, highlighting the most important terms in each. From each cluster, the user selected one important term to create the first layer of the KM: “food retail,” “food service,” “manufacturing,” “consumer,” and “logistics.”
Fig. 8

LDA_Cluster visualization


Knowledge map expansion

After constructing the first layer of the KM, the user could select any of its terms as the key term for KM expansion. First, context_terms were extracted throughout the corpus within the win_size range. In this case study, we set win_size to 20 for all steps; however, the user can assign other values based on their insight. Next, two filtering operations were applied to all extracted context_terms:

TFIDF filtering: The TFIDF of each context term was calculated and normalized to the 0–1 range. Terms whose TFIDF fell below the threshold of 0.65 (set by the user) were ignored. The user determines the TFIDF threshold based on the field of study and the context of analysis. The important context_terms for the selected key term were obtained at the end of this step.

Phi filtering: Table 3 shows the values of the parameter φ calculated for the most important context terms. Terms with negative φ values (boldfaced in the original table) have no logical relationship with the key terms and were ignored.
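The φ filter can be sketched by computing a phi correlation coefficient between the presence of a key term and a candidate context term in a co-occurrence window. In the sketch below, sentences stand in for the win_size windows, and all data are illustrative.

```python
import math
from itertools import chain

# Sentence-level co-occurrence sketch of the phi filter; the study uses
# a sliding window of win_size tokens, which sentences approximate here.
sentences = [
    ["food", "retail", "stores", "online"],
    ["food", "retail", "worker", "grocery"],
    ["logistics", "transport", "costs"],
    ["food", "retail", "online", "shopping"],
    ["manufacturing", "automation", "demand"],
]

def phi(key, ctx):
    """Phi correlation between presence of `key` and `ctx` in a window."""
    n = len(sentences)
    n11 = sum(key in s and ctx in s for s in sentences)
    n10 = sum(key in s and ctx not in s for s in sentences)
    n01 = sum(key not in s and ctx in s for s in sentences)
    n00 = n - n11 - n10 - n01
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

vocab = set(chain.from_iterable(sentences))
context = {c: phi("retail", c) for c in vocab if c != "retail"}
kept = {c: v for c, v in context.items() if v > 0}  # drop negative-phi terms
print(sorted(kept, key=kept.get, reverse=True))
```

Terms with negative φ (such as "logistics" relative to "retail" here) are exactly those the filter discards.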
Table 3

Matrix of co-occurrence terms of key terms

Key term: Food retail
Context term | n | φ
Stores | 1028 | 0.8740023
Worker | 985 | 0.8880104
Grocery | 973 | 0.8763271
Foodstuffs | 902 | 0.7659201
Online | 753 | 0.8841263
Service | 530 | 0.7506144
Covid | 423 | 0.3056318
Sector | 320 | 0.8740073
Employees | 250 | 0.5756321
Shopping | 205 | 0.2036214
E-commerce | 105 | 0.1025964
Income | 98 | 0.6584220
Availability | 86 | −0.1794235
Banks | 52 | −0.0856321
Charge | 25 | −0.1794235
Home | 12 | −0.0856321

Key term: Food service
Context term | n | φ
Industry | 1258 | 0.7256924
Government | 1012 | 0.8062149
Consumers | 856 | 0.6725691
Sector | 823 | 0.7832588
Workers | 789 | 0.2362459
Restaurant | 706 | 0.4936511
Products | 526 | 0.5265426
Demand | 302 | 0.3826559
Security | 264 | 0.7269556
Delivery | 212 | 0.0936559
Waste | 156 | −0.2046952
Challenge | 93 | −0.1969524
Packaging | 87 | 0.1795882
FAO | 75 | −0.0965289
Consumption | 65 | −0.1794235
Restaurant | 56 | −0.0856321

Key term: Consumer
Context term | n | φ
Shopping | 925 | 0.7856954
Users | 856 | 0.4852115
Law | 802 | 0.69523485
Spending | 725 | 0.3958442
Behavior | 652 | 0.7954215
Respondent | 562 | 0.2953215
Enforcement | 423 | 0.1695584
Food | 365 | 0.2954225
Live | 302 | −0.1558569
Eating | 258 | 0.5366549
Drinking | 203 | 0.6569542
Crisis | 195 | 0.1658447
Comes | 102 | −0.065948
Said | 96 | −0.3625529
Learning | 85 | −0.4595548
Half | 76 | −0.2958525

Key term: Manufacturing
Context term | n | φ
Adapt | 865 | 0.5652965
Sector | 725 | 0.2265995
Product | 702 | 0.0236955
Covid | 602 | 0.1958521
Opportunity | 523 | −0.5958442
Industry | 406 | 0.2695225
Demand | 369 | 0.1236955
Automation | 302 | 0.6958848
Growth | 256 | 0.3695584
Impact | 203 | 0.2998455
Operation | 123 | 0.7514523
Companies | 120 | 0.8541236
Time | 92 | −0.8582263
Technology | 82 | 0.6592215
Serving | 78 | −0.5658412
Survive | 65 | −0.2325974

Key term: Logistics
Context term | n | φ
Market | 852 | 0.2369584
Costs | 802 | 0.5265842
Law | 725 | 0.6589445
Disruption | 695 | 0.2569472
Transport | 510 | 0.3369525
Management | 453 | 0.3654258
Processing | 326 | 0.2568445
Impact | 237 | 0.6958444
Survey | 203 | −0.5258447
Covid | 126 | 0.2569577
Firms | 103 | 0.2569542
Recovery | 92 | 0.3256521
Monitor | 83 | −0.9658255
Availability | 62 | −0.3652458
Tool | 52 | −0.9655258
At the end of the filtering operation, the co-occurrence terms of the key term selected by the user were obtained. These co-occurrence terms are the most important terms that semantically describe the key term (with respect to the impact of COVID on the supply chain). For example, as Table 3 shows, the terms “shopping,” “users,” “law,” “spending,” “behavior,” “respondent,” “enforcement,” “food,” “eating,” and “crisis” were extracted as the context of the “consumer” key term after the filtering step.

Visualize_KM

In this component, the relationships between the key terms and the context_terms are displayed in a KM. As shown in Fig. 9, the terms selected from topic modeling (“food retail,” “food service,” “manufacturing,” “consumer,” and “logistics”) form level 1 of the KM, and the resulting context_terms at each lower level are linked to their upper key term. For example, according to the KM in Fig. 9, the terms “products” and “supply” are the underlying terms for the key term “demand” and are attached to it in the KM.
Fig. 9

Crisis terms knowledge map


Enrich_Query

Table 4 shows some details of the query enrichment in the case we ran. Considering the KM, the user selected the query “food retail + online” to discover knowledge propositions throughout the corpus that define the relationship between “food retail” and “online” in times of crisis. For this purpose, the Enrich_Query component proposed terms related to this query so that more relevant KUs could be extracted. In the Enrich_Query component, the analogy rate of every term in the corpus with the initial query was calculated using word embedding.
Table 4

Term analogy based on adding concepts

Initial query: Food retail + online
Analogy term | Channel | Business | Impact | COVID | Consumer | Shopping | Frequency | Performer | Company
Analogy rate | 0.419 | 0.413 | 0.400 | 0.391 | 0.388 | 0.385 | 0.374 | 0.371 | 0.355

Initial query: Food retail + worker
Analogy term | Company | Consumer | Safety | COVID | Impact | Business | Social | Social | Time
Analogy rate | 0.479 | 0.442 | 0.421 | 0.420 | 0.408 | 0.401 | 0.393 | 0.374 | 0.369

Initial query: Food service + sector
Analogy term | Home | Restaurant | Producer | Impact | Pandemic | Covid | Shopping | Delivery | Level
Analogy rate | 0.426 | 0.402 | 0.366 | 0.356 | 0.345 | 0.332 | 0.326 | 0.301 | 0.255

Initial query: Consumer + behavior
Analogy term | Restaurant | Buying | Source | Shop | Error | Enforcement | Industry | Supplier | Supply
Analogy rate | 0.478 | 0.463 | 0.456 | 0.445 | 0.436 | 0.423 | 0.422 | 0.415 | 0.396
The analogy rates of the terms “channel,” “business,” “impact,” “COVID,” “consumer,” “shopping,” “frequency,” “performer,” and “company” to “food retail + online” were higher than those of other terms, so these terms were suggested for enriching the query. Three of them, “channel,” “impact,” and “consumer” (boldfaced in the original table), were selected from the suggestions of the query enrichment component. Similar operations were performed for the “food retail + worker,” “food service + sector,” and “consumer + behavior” queries, as demonstrated in Table 4. Table 5 shows query enrichment through removing concepts from the key term. Based on the KM in Fig. 9, COVID-19 has influenced the “logistics” part, and one of the lower-layer terms of this key term is “management.” The “logistics − management” query aims to examine the effects of COVID-19 on “logistics” without considering management concepts. As shown in Table 5, the terms “truck,” “freight,” “impact,” “sea,” “consumer,” “transportation,” “frequency,” “congestion,” and “company,” which are most similar to this query and can enrich it, were extracted. Of these, “truck,” “freight,” “sea,” “consumer,” and “transportation” were selected for enrichment.
Table 5

Term analogy based on removing concepts

Initial query: Logistic − management
Analogy term | Truck | Freight | Impact | Sea | Consumer | Transportation | Frequency | Congestion | Company
Analogy rate | 0.461 | 0.458 | 0.401 | 0.365 | 0.386 | 0.385 | 0.379 | 0.371 | 0.356
Term vectors with a higher analogy rate to the query are closer to the query vector than other term vectors. As an example, Fig. 10 shows the relationship between the two-dimensional representation of the query vector “food retail + online” and the term vectors “channel,” “business,” “impact,” “COVID,” and “consumer.”
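A two-dimensional display like Fig. 10 can be produced by projecting the high-dimensional vectors onto their first two principal axes. The sketch below uses an SVD-based PCA on random stand-in vectors; the projection method is an assumption, as the paper does not state which dimensionality reduction it used.

```python
import numpy as np

# Stand-in query/term vectors; in practice these come from word2vec.
rng = np.random.default_rng(2)
names = ["query", "channel", "impact", "consumer", "business", "covid"]
vectors = rng.normal(size=(6, 16))

# Center the data and take the top-2 principal axes via SVD (PCA).
centered = vectors - vectors.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # 2-D coordinates for plotting

for name, (x, y) in zip(names, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```

Plotting these coordinates places terms with high analogy rates near the query point, as in Fig. 10.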
Fig. 10

Two-dimensional display of “food retail + online” query vector neighboring to “channel; impact; consumer; business; COVID” terms vector

Two-dimensional display of “food retail + online” query vector neighboring to “channel; impact; consumer; business; COVID” terms vector

Knowledge extraction/evaluation

Based on the created query, KUs were extracted from the corpus. Then, the information value of each KU was calculated, and the most valuable KUs were extracted. The resultant evaluated KUs are shown in Fig. 11, and the details of extracting them are described below:
Fig. 11

Evaluated knowledge units outputs

The first query was “food service + sector + restaurant.” After enrichment, the resulting query was “food service + sector + restaurant + home + producer.” According to the IV percentages of the clusters, shown as a pie chart (43%, 40%, 15%, and 2%), clusters 1 to 3 have the most significant IVs, with 14 KUs. A set of KUs focusing on the enriched query revealed that the effects of COVID-19 on the food service industry have been significant, and that COVID-19 had greater influence over the downstream retail and food service sectors, mostly informal SMEs.

The next query was “food retail + online.” After enrichment, the resulting query was “food retail + online + channel + impact + consumer.” The graph of the similarity values of the extracted KUs with the query has five clusters, with IV percentages of 52%, 25%, 16%, 6%, and 1%. Clusters 1 to 4 have high IV values and include 14 KUs; some of the extracted KUs are shown in the evaluated output in the figure. The KUs for this query showed that, due to the pandemic crisis and the desire to minimize contact between people, online food shopping increased, and the online channel may become an integral part of food shopping. They also showed that increased prices imposed by online platforms could lead to inequality in access to food.

Another query was “food retail + worker,” enriched to “food retail + worker + company + impact + safety.” The similarity graph has six clusters, with IV percentages of 42%, 20%, 13%, 19%, 5%, and 1%. Clusters 1 to 5 have high IVs and include 14 KUs. The extracted KUs showed that supermarket chains are already facing increasing demand for workers, with workers receiving low wages and inadequate social security benefits.

Based on the KM, the query “consumer + behavior” was created, focusing on the key term “consumer.” The IV percentages of the clusters (42%, 38%, 18%, and 2%) indicate that clusters 1 to 3 have significant IVs, including 12 KUs. The extracted KUs, shown in the evaluated output in the figure, make clear that the rapid spread of COVID-19 has increased food health-conscious behavior. Selecting the query “consumer + behavior + restaurant” showed that the number of customers of restaurants and shopping malls has fallen sharply. The query “consumer + behavior + buying” produced KUs showing that consumer buying behavior has become more erratic: due to the pandemic, the use of digital tools in customers’ buying patterns has increased and will probably continue after the crisis; online food ordering has increased dramatically; and the desire to buy healthy products has grown among consumers. Focusing on “shop” with the query “consumer + behavior + shop,” the KUs show that traditional consumers increasingly used frictionless shopping during the pandemic. At the same time, several KUs show that fast food delivery by farm shops is an advantage, while stores have not been able to cope with the high demand of customers during the pandemic.

The query “logistic − management” was then selected. To construct a suitable query, terms similar to the “logistic − management” expression were used to modify it, resulting in the query “freight + truck + sea + transportation.” The IV percentages of the clusters are 31%, 29%, 28%, 10%, and 2%, so clusters 1 to 3 have significant IVs. Some of the extracted KUs are shown in the figure. The KUs focusing on “logistic − management” imply that, due to increased shipment inspections to comply with border control protocols during the pandemic, logistics costs have risen. The KUs extracted with the focus on “logistic − management + freight + truck + sea” yielded the hypothesis that, in the ocean freight sector, sailing schedules are subject to disruption and ports and terminals face equipment imbalances; operational restrictions have also led to delivery delays, congestion, and rising final prices.

The last query was “manufacture + operation,” enriched to “manufacture + operation + plants + products + demand + packaging.” The evaluated KUs show IV percentages of 45%, 32%, 21%, and 2%, so clusters 1 to 3 have significant IVs. The extracted KUs showed that sales of plant-based products increased drastically during the pandemic. With the terms “demand + packaging” in the lower layers of the KM, the KUs showed that food suppliers faced a sudden increase in sales during the pandemic due to increased consumer demand for packaged and processed foods. Demand for food robotics has also increased, driven by social distancing considerations.

Discussion

The main goal of this study was to apply KM and NLP techniques to identify the effects of natural crises on the SC. To achieve this goal, we proposed and implemented a framework. After applying topic modeling and analyzing word frequency and co-occurrence in 975 analytical reports related to the effects of natural crises on the SC, a KM of the five main related topics was created and expanded. This study used the KM (or co-occurrence network) of risk terms to derive and categorize 53 knowledge propositions, which express the impact of the COVID-19 crisis on essential parts of the SC in a categorized manner. The categorized KUs extracted from user-system interactions could inform industry experts, governments, and academic researchers about the various effects of natural crises on the SC.

Natural crises are disruptive and affect SC performance; effective SC crisis monitoring is therefore necessary to reduce costs and increase an organization’s long-term sustainability. The proposed framework can help organizations automatically develop and expand a hierarchical KM of crisis effects. In addition, they can target the critical issues judged by analysts to monitor risks, identify threats, and determine corrective or protective courses of action. The extracted SC crisis terms form a dictionary of crises that can be used to create suitable queries for extracting knowledge propositions from analytical reports on various SC crises.

The resultant KM and KUs show five sectors of the pandemic’s impact: (1) food retail, (2) food services, (3) manufacturing, (4) consumers, and (5) logistics, briefly described as follows. In the consumer sector: consumption patterns changed during the COVID-19 pandemic. In the manufacturing sector: consumer demand increased for packaged and processed foods; robots were used more in production processes; and healthier materials, such as vegetables and fresh foods, were produced in greater quantities. In the logistics sector: due to increased shipment inspections to comply with border control protocols during the pandemic, logistics costs rose. In the food retail and food services sectors: various aspects were affected by the pandemic crisis, such as increased demand for workers in supermarkets and drastic changes in consumption patterns and food ordering.

Providing such categorized knowledge propositions as a decision support system can assist SC analysts in short- and long-term decisions in similar natural crises. By using text-mining capabilities to categorize large volumes of data, this research addresses the validation problem of qualitative methods in crisis factor analysis. At the same time, using human intelligence to steer the text-mining algorithms and correct the obtained results addresses the difficulty that text analysis methods in related research have with complex crisis data.

Conclusion

The main goal of this research is to propose a text-analysis-based framework, using co-occurrence term analysis and knowledge map construction, to analyze a large volume of textual data retrieved from websites and social media concerning crisis effects on supply chains (SCs). We implemented the proposed framework and, to validate it, conducted a case study of the effects of COVID-19 on SCs. For this purpose, 975 online analytical reports on related topics were gathered and cleaned as a corpus. Data pre-processing operations were then performed to prepare the data for further processing. Finally, using topic modeling, term frequency, and co-occurrence analysis, the set of analytical reports related to the effects of natural crises on the SC was analyzed. Through textual analysis, 53 crisis knowledge propositions were extracted from the 25,272 sentences in the corpus. A knowledge map was also created by co-occurrence analysis of the 631,799 terms and 31,864 unique terms, identifying the relationships among the 110 main crisis factors. Using the implemented framework, a KM of crisis effects and a dictionary of crisis factors on the SC were created in five sectors: (1) food retail, (2) food services, (3) manufacturing, (4) consumers, and (5) logistics. This KM made it possible to create suitable queries for extracting categorized SC crisis knowledge, and this categorization can improve decision-makers’ knowledge in dealing with SC crises. According to an expert panel, the categorized KUs monitored and reported the effects of the COVID-19 crisis on the SC well across the various sectors. In summary, using the implemented framework, we created a crisis KM and extracted related knowledge through user steering to identify the effects of natural crises on the SC.

Theoretical implication

This paper addresses two main research gaps. First, much research has been done to identify SC crises based on qualitative methods. In addition to sample limitations, these methods are subject to human biases and judgments that make the results questionable. For example, the survey data in the research by Giannakis et al. (2016), based on human judgments, raise concerns about possible biases. Second, data-driven research has been conducted by analyzing large volumes of data to minimize the problems of data limitations and human judgment. These methods analyze the data using conventional text analysis methods, regardless of the nature of the risk (Wang and Ye 2018; Kara et al. 2020; Ganesh and Kalpana 2022). Risk data are ambiguous, complex, and have multiple meanings; thus, one risk term may fall into different sectors (Pai et al. 2003; Chiu and Choi 2016). Text analysis methods alone cannot make this distinction and require human steering (Jin et al. 2018; Chu et al. 2020). Our proposed method helps address both gaps. Since it applies text-mining algorithms to analyze a large volume of SC crisis data, the sample limitation is addressed; and since it is based on user-system interactions, the complexity and vagueness of the data can be resolved through user feedback. To the best of our knowledge, no previous research has used this method for risk analysis, including the analysis of the effects of natural crises on the SC, or for use in decision support systems. Thus, very briefly, this study significantly extends the existing literature on understanding the effects of crises on the SC in two respects: (1) using a large volume of data and thereby resolving the sample limitation of qualitative methods and (2) enhancing analysis performance by benefitting from user steering in the framework.

Practice implications

An SC crisis monitoring framework could be useful for both firms and governments: firms could use it for business continuity, and governments for effective policymaking (O’Rourke 2014). The current literature shows how specific decision-making processes before, during, and after a crisis can improve a company’s SC performance (Dayton and Bernhardsdottir 2010). The development of information networks has enabled companies to access and analyze crisis information transparently, so companies can make decisions and take action using crisis data analysis instead of relying on speculation (Jin et al. 2018; McAfee et al. 2012). In SC management, data analysis is used for various operations such as procurement, service, production, warehousing, and demand management; for instance, Nguyen et al. (2015) discussed using metadata in SC management. Considering the importance of SC crisis management, this study proposes a framework for monitoring and analyzing crisis data and establishing a crisis decision support system for firms, enabling them to respond appropriately to crises that disrupt SC performance. This framework combines human skills with text analysis techniques to provide efficient crisis data management, categorized related knowledge, and evidence-based decision-making.

Policy implication

Applying this framework can also be very useful for government policymakers in providing effective policies to support consumer services and maintain the continuity of the business market. Natural crises often expand rapidly and affect different parts of the SC (Natarajarathinam et al. 2009); therefore, governments pursue prudent policies to limit their spread (Barnes and Oloruntoba 2005). Given the impact of these policies on the performance of all parts of the SC, creating a categorized knowledge database about SC crises can enhance policymakers’ knowledge and lead to more effective policies (Miroudot 2020). The large amount of data generated in information networks about SC crises creates a good opportunity for government policymakers to make evidence-based policies instead of relying on intuition (Newig et al. 2016). Therefore, one objective of this study is to propose a decision support system that can guide policymakers, through the analysis of crisis data and the creation of classified knowledge, in creating policies for SC crisis management. The extracted pieces of knowledge can be used to inform, monitor, evaluate, and revise the decisions made by policymakers.

Limitations and future research

This study has several limitations that could be addressed in future research. First, the crisis KM does not discover the type of relationship between crisis terms. Crisis terms are related to each other, and identifying these relationships can be very useful in constructing appropriate queries. Second, data collection was limited to social media and web reports; this was done only to demonstrate the operation of the proposed framework. Automation systems such as enterprise resource planning and customer relationship management collect very important data related to financials, SC, operations, commerce, reporting, manufacturing, and human resource activities that should also be considered in the analysis. Hence, future developments of this research could focus on two aspects. First, they could analyze the dependencies among various crisis factors; understanding the relationships between different types of crises would significantly improve supply chain crisis classification and management, and this can be achieved by creating an ontology of the crisis factors. Second, for the analysis of other crisis data types, such as data from automation systems, future extensions could focus on creating data analysis rules, identifying thresholds, and mapping the results to identify crisis factors in the supply chain.
References

1. Nagamachi M. Kansei engineering as a powerful consumer-oriented technology for product development. Appl Ergon. 2002.

2. McAfee A, Brynjolfsson E. Big data: the management revolution. Harv Bus Rev. 2012.

3. O'Rourke D. The science of sustainable supply chains. Science. 2014.

4. Rizou M, Galanakis IM, Aldawoud TMS, Galanakis CM. Safety of foods, food supply chain and environment within the COVID-19 pandemic. Trends Food Sci Technol. 2020.

5. Freilich S, Kreimer A, Meilijson I, Gophna U, Sharan R, Ruppin E. The large-scale organization of the bacterial network of ecological co-occurrence interactions. Nucleic Acids Res. 2010.

6. Newig J, Kochskämper E, Challies E, Jager NW. Exploring governance learning: how policymakers draw on evidence, experience and intuition in designing participatory flood risk planning. Environ Sci Policy. 2016.

7. Menut L, Bessagnet B, Siour G, Mailler S, Pennel R, Cholakian A. Impact of lockdown measures to combat Covid-19 on air quality over western Europe. Sci Total Environ. 2020.

8. Nasirpour MH, Sharifi A, Ahmadi M, Jafarzadeh Ghoushchi S. Revealing the relationship between solar activity and COVID-19 and forecasting of possible future viruses using multi-step autoregression (MSAR). Environ Sci Pollut Res Int. 2021.

9. Ahmadi M, Sharifi A, Khalili S. Presentation of a developed sub-epidemic model for estimation of the COVID-19 pandemic and assessment of travel-related risks in Iran. Environ Sci Pollut Res Int. 2020.

10. Sharifi A, Ahmadi M, Ala A. The impact of artificial intelligence and digital style on industry and energy post-COVID-19 pandemic. Environ Sci Pollut Res Int. 2021.

