
Research Article | DOI: https://doi.org/10.31579/2688-7517/043

Assessing Differential Item Functioning (DIF) for the Pearson Test of English (PTE): A Study of Test Takers with Different Fields of Study

  • Hamed Ghaemi 1*

1Bahar Institute of Higher Education, Mashhad, Iran

*Corresponding Author: Hamed Ghaemi, Bahar Institute of Higher Education, Mashhad, Iran.

Citation: Hamed Ghaemi (2022). Assessing Differential Item Functioning (DIF) for the Pearson Test of English (PTE): A Study of Test Takers with Different Fields of Study. J. Addiction Research and Adolescent Behaviour 5(3); DOI: 10.31579/2688-7517/043

Copyright: © 2022 Hamed Ghaemi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: 07 March 2022 | Accepted: 17 March 2022 | Published: 29 April 2022

Keywords: differential item functioning (DIF); item response theory (IRT); likelihood ratio approach (LR); fields of study; Pearson Test of English

Abstract

Differential Item Functioning (DIF), a statistical property of an item that signals unexpected item performance on a test, occurs when different groups of test takers with the same level of ability perform differently on the same items. The aim of this paper was to examine DIF in Pearson Test of English (PTE) test items. To that end, 250 intermediate EFL learners aged 26 to 36 from two different fields of study (125 Engineering and 125 Sciences) were randomly chosen for the analysis. The Item Response Theory (IRT) Likelihood Ratio (LR) approach was utilized to find items showing DIF. The scored items of the 250 PTE test takers were analyzed using the IRT three-parameter model, which incorporates item difficulty (b parameter), item discrimination (a parameter), and pseudo-guessing (c parameter). The results of an independent samples t-test comparing the two groups' means showed that the Science participants performed better than the Engineering ones, particularly in the Speaking & Writing and Reading sections; the PTE test was statistically easier for the Science students at the 0.05 level. Linguistic analyses of the DIF items corroborated the quantitative findings, likewise indicating better performance on the part of the Science students.

Introduction

The growth of psychometric tests and testing procedures has been shaped by social and political changes over the past few decades (Owen, 1998). When psychometric tests are used to make individual or group comparisons, item bias ought to be considered to reduce inappropriate interpretations. Test bias differs from test fairness in that bias is usually measured quantitatively, while fairness is judged subjectively and intuitively and cannot be described in absolute terms; no test can simply be categorized as fair or unfair. It is not the test's characteristics that matter on their own but the interpretations of its scores and the resulting decisions that are of overriding significance, since students' educational futures are usually determined by those decisions. The term bias pertains to the instruments applied, the testing procedures, and the methods of score interpretation; a mere score difference between two groups does not by itself define bias (Osterlind, 1983).

The term bias has largely been superseded by differential item functioning (DIF), which denotes that individuals who are matched on ability nonetheless perform differently on a test and obtain different scores accordingly. Test bias, or DIF, is concerned with systematic errors and reveals psychometric characteristics of items showing that they do not measure different individuals or groups impartially. In actual fact, DIF arises when "individuals from various classes have the similar ability level but display different likelihood in responding to an item accurately" (Osterlind, 1983, p. 32). Conversely, the absence of DIF represents the situation in which test takers with the same level of ability, irrespective of their group membership, have the same probability of answering an item correctly. DIF concerns the extent to which test items differentiate between participants of the same ability level who come from different groups defined by gender, ethnicity, education, and so on (Zumbo, 2007). Parameters contributing to item or test bias include "culture, education, language, socioeconomic status, and so on" (Van de Vijver, 1998, p. 35).

Test bias, or DIF, should be evaluated during the test construction process (Osterlind, 1983). Tests ought to be constructed so that any inconsistency observed in examinees' results can be attributed to differences in the construct the test is meant to assess. By detecting and eliminating items demonstrating DIF, alongside routine item analysis, test developers can find problematic items lacking sound psychometric properties. This paper investigated item analysis of the PTE, an internationally recognized proficiency test, by means of an IRT-based DIF study.

Literature Review

  1. Methods of DIF Identification

Finding items demonstrating DIF permits test developers to match examinees with the pertinent knowledge. DIF analysis is concerned with students' scores on tests, the measurement of their latent ability, and the examination of individuals who are comparable in capability but come from different backgrounds yet perform differently on an item. The Mantel-Haenszel test (Mantel and Haenszel, 1959) is widely used for detecting DIF; it suits even small numbers of participants and allows test makers to use simple arithmetic measures, alongside the logistic regression methods proposed by Zumbo (2007). These modest arithmetic procedures offer a more in-depth account of DIF and permit researchers to distinguish between uniform and non-uniform DIF. Other procedures for detecting DIF employ IRT models, as described by Lord (1980), Raju (1990), and Thissen, Steinberg, & Wainer (1994). These methods treat examinees' ability and item characteristics more accurately but require larger sample sizes. Among these approaches, IRT models are used most often to spot items flagging DIF, as they "render the most useful data for identifying differences on particular items" (Ertuby, 1996, p. 51).
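To make the Mantel-Haenszel idea concrete, the following minimal Python sketch computes the MH common odds ratio from 2x2 correct/incorrect tables, one per ability stratum; values near 1.0 suggest no DIF. The function name and the table values are illustrative assumptions, not the computation or data used in this study.

```python
def mantel_haenszel_odds_ratio(tables):
    """MH common odds ratio across K ability strata.

    Each table is ((ref_correct, ref_incorrect),
                   (focal_correct, focal_incorrect)).
    alpha_MH = sum(A_k * D_k / N_k) / sum(B_k * C_k / N_k);
    a value near 1.0 indicates no DIF on the studied item.
    """
    num = den = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical low / mid / high ability strata for one item:
tables = [((30, 20), (25, 25)),
          ((40, 10), (35, 15)),
          ((45, 5), (44, 6))]
print(mantel_haenszel_odds_ratio(tables))  # > 1 favors the reference group
```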

  2. Models of Item Response Theory (IRT)

Most measurement procedures, particularly in education and psychology, deal with latent variables (Hambleton, 1996). The chance of answering an item correctly hinges on both the item's characteristics and the examinee's level of ability. This relationship is stated mathematically as the item characteristic curve (ICC), also known as the item response function, which predicts examinees' scores from their underlying abilities. Examinees' ability is plotted along the X-axis and represented by theta (θ), while the probability of a correct response is plotted on the Y-axis and denoted p(θ). As Baker (1985) proposed, the shape of the ICC rests on the item difficulty (b-parameter), item discrimination (a-parameter), and guessing power, known as the pseudo-chance parameter (c-parameter). Depending on their horizontal location, ICCs vary, locating individuals' ability levels against items' difficulty. The b-parameter is the point on the ability scale at which the likelihood of selecting the right answer is 0.50 (that is, a 50 percent chance of a correct response, guessing aside). Larger b-values stand for more difficult items; in theory the parameter ranges from -2.5 to +2.5, from very easy items to very tough ones.

Item discrimination (a-parameter) corresponds to the slope of the ICC and reflects the accuracy of measurement of a given item. The curve's slope and item discrimination are positively related: a steeper slope indicates greater discriminating power. The a-value typically ranges from 0 to 2; items below 0.5 have little discriminating power, while items with larger values differentiate individuals well. The guessing parameter (c-parameter) gives the probability that a test taker at the lowest level of ability answers the item correctly; it ranges from 0 to 1. IRT models differ in the item properties they involve. The one-parameter, or Rasch, model involves only item difficulty and examinee ability. The two-parameter model adds item discrimination to item difficulty (the probability of a correct response given the examinee's ability). The third parameter, the pseudo-chance parameter, applies when items have a multiple-choice format and examinees can obtain the correct response by guessing. IRT models assume unidimensionality and local independence and are based upon the shape of the ICC and examinees' level of ability.
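As a concrete illustration of how the three parameters shape an ICC, here is a minimal Python sketch of the 3PL response probability; the function name and parameter values are illustrative, not items from this study.

```python
import numpy as np

def icc_3pl(theta, a, b, c, D=1.7):
    """3PL item characteristic curve:
    P(theta) = c + (1 - c) / (1 + exp(-D * a * (theta - b)))."""
    return c + (1 - c) / (1 + np.exp(-D * a * (theta - b)))

# At theta = b the curve passes through c + (1 - c) / 2, which equals
# 0.50 exactly when guessing is negligible (c = 0).
print(icc_3pl(theta=0.0, a=1.2, b=0.0, c=0.0))  # 0.50
print(icc_3pl(theta=0.0, a=1.2, b=0.0, c=0.2))  # 0.60
```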

  3. Non-uniform vs. Uniform DIF

With regard to the logistic regression model, DIF falls into two distinct categories: uniform and non-uniform. Uniform DIF affects participants at all ability levels equally: the two groups' ICCs have the same shape, but one is shifted relative to the other. De Beer (2004) notes that in uniform DIF one group's likelihood of picking the correct answer is consistently lower than the other's; the ICC for that group therefore lies entirely below the other group's curve, as illustrated in Fig. 1.

Figure 1: Uniform DIF item (Adapted from De Beer, 2004)

When the two groups differ in their slopes, the item shows non-uniform DIF. In other words, in non-uniform DIF the ICCs have different shapes for different groups of examinees, and the DIF affects examinees inconsistently across the ability range. De Beer (2004, p. 42) states that "the ICC shapes cross at a given point implying that one group has a lesser possibility to answer the test items accurately while such possibility for the other group was still higher". Fig. 2 shows the ICC for an item demonstrating non-uniform DIF.

Figure 2: Non-uniform DIF item (Adapted from De Beer, 2004)
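The contrast between the two DIF types can also be sketched numerically: with a shared slope but shifted difficulty the two ICCs never cross (uniform DIF), whereas different slopes make them cross (non-uniform DIF). The parameter values below are illustrative assumptions only.

```python
import numpy as np

def icc(theta, a, b, D=1.7):
    # A 2PL curve (c = 0) is enough to illustrate the two DIF patterns.
    return 1.0 / (1.0 + np.exp(-D * a * (theta - b)))

theta = np.linspace(-3, 3, 7)

# Uniform DIF: same slope, shifted difficulty; the focal group's curve
# lies below the reference group's at every ability level (cf. Figure 1).
ref_u, foc_u = icc(theta, 1.0, 0.0), icc(theta, 1.0, 0.5)
assert np.all(foc_u < ref_u)

# Non-uniform DIF: different slopes; the curves cross, so which group is
# disadvantaged depends on the ability level (cf. Figure 2).
ref_n, foc_n = icc(theta, 1.5, 0.0), icc(theta, 0.6, 0.0)
print(np.sign(ref_n - foc_n))  # the sign flips around theta = 0
```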

Chen and Henning (1985) examined DIF for test takers with different language backgrounds, Chinese and Spanish. They employed the Transformed Item Difficulty (TID) method, first presented by Angoff (1993), which compares item difficulty indices between two groups of test takers and identifies outliers. One hundred eleven test takers, seventy-seven Chinese and thirty-four Spanish, took part in the research; the sample, however, was not large enough for the difficulty parameter to be estimated consistently. Lawrence, Curley, & McHale (1988) and Lawrence & Curley (1989) studied DIF with respect to students' gender in the Scholastic Aptitude Test (SAT) using the standardization method. The outcomes showed that female test takers performed less well on the items than male test takers. All these studies, though, have some downsides. First, most of them sought DIF (uniform and non-uniform) only with respect to item discrimination. Furthermore, in most studies that compared students' total scores through standardization procedures, items were not examined before DIF detection, which may jeopardize the results. Ownby and Waldrop-Valverde (2013) applied IRT to determine whether the way participants respond to items has any influence on older readers in a cloze test. They spotted twenty-four items flagging DIF, concluding that DIF was a substantial source of variance that may imperil the interpretation and use of test scores. Koo (2014) conducted meta-analytic DIF analyses on a reading test and the Florida Comprehensive Achievement Test (FCAT), taking language, gender, and ethnicity into account. He found that vocabulary and phraseology items favored non-English language learners irrespective of their gender and ethnicity. Aryadoust and Zhang (2015) applied a Rasch model to a reading comprehension test in a Chinese context. They found that while one class performed better on vocabulary, grammar, and general English proficiency, the other class excelled in the skimming and scanning parts. Most prior studies found that gender had a trivial impact on readers' performance (Hong & Min, 2007; Chen & Jiao, 2014). Federer, Nehm, & Pearl (2016) explored how male and female participants differed in answering open-ended questions and found that women performed better under novel circumstances. In another study, focusing on evolution, Smith (2016) developed an instrument dealing with evolution theory and succeeded in distinguishing between high school and university students using items flagging DIF.

The Current Study

The present paper aimed at identifying the items that were susceptible to DIF and determining which field of study was advantaged on those items. To date, most DIF investigations have been based on comparisons of gender (e.g., Lawrence, Curley, & McHale, 1988; Carlton, 1992; Federer et al., 2016), ethnicity (Schmitt, 1990; Koo, 2014), or language (Chen & Henning, 1985; Ryan & Bachman, 1992). Few studies have scrutinized DIF for students with different subject fields on the PTE as an international proficiency test. Thus, DIF detection for students with different subject fields (Engineering vs. Sciences) taking the PTE is worth investigating. The main objective of this paper was to detect questions displaying DIF on the PTE proficiency test for test takers with different fields of study (Engineering vs. Sciences) by means of IRT analysis. To that end, two research questions motivated this study:

RQ1: Do test items (PTE test) function differently for test takers with different fields of study (Engineering vs. Sciences)?

RQ2: Are there linguistic features of these items that account for the DIF results?

Methodology

Participants

This study included 250 intermediate EFL learners with an age range of 26 to 36. They were Ph.D. applicants and Master's degree holders in two different fields of study (125 Engineering and 125 Sciences) in Iran. All the participants spoke Persian/Farsi as their L1.

Instruments

In line with the purposes of the study, the researchers applied one instrument as follows:

Pearson Test of English (PTE)

Pearson Language Tests is devoted to measuring and validating the English language proficiency of non-native English speakers. The tests comprise the Pearson Test of English (PTE) Academic, PTE General, and PTE Young Learners, administered in association with Edexcel, the world's largest examining body. In 2009, Pearson Language Tests introduced PTE Academic, which is recognized by the Graduate Management Admission Council (GMAC). Test scores are aligned with the levels defined in the Common European Framework of Reference for Languages (CEFR). PTE Academic is delivered through the Pearson VUE centers, which are also in charge of administering the GMAT (Graduate Management Admission Test). Upon its release, the test was accepted by nearly 6,000 organizations; as a case in point, it is accepted by the UK Border Agency and the Australian Department of Immigration and Citizenship for visa applications. The test is mostly scored by computer rather than by human raters, to reduce the time test takers wait for their results.

 

Table 1: Detailed pattern of the PTE

            

Data Collection Procedures   

The researchers asked the PTE candidates to provide their score reports, showing the score for each section as well as the total score. In addition, the scores for each item were collected and used for data analysis. The score for each part was based on correct responses, with no negative marking for wrong answers. During the administration of the PTE test, the usual precautions were observed:

  1. Strict administration procedures were followed to minimize the effects of external factors such as cheating.
  2. To be acceptable, any form of ID had to be a valid (unexpired) document or one issued no more than 10 years earlier.
  3. Test takers had to present the same ID details on the day of the test that they had shared when booking it.
  4. The name on the ID had to match exactly the name used when booking the test.
  5. Test takers who failed to produce the required ID were not allowed into the test room and forfeited their test fee.
  6. Copies were not accepted; the original document had to be provided, and no other ID was accepted at the test center.

Design 

Given that the researchers could not manipulate or control the independent variable, the design of this study was ex post facto, as described by Hatch and Farhady (1982). Such a design is normally used when the researchers do not interfere with the participants' traits. The test takers' subject field served as the independent variable and their PTE test scores as the dependent variable.

Data Analysis Procedures

The scored PTE items of the two hundred and fifty Iranian EFL test takers were entered into the IRT 3PL model, which gives the probability that a test taker with ability theta (θ) responds to an item correctly as a function of item difficulty (b parameter), item discrimination (a parameter), and pseudo-guessing (c parameter) (Hambleton, Swaminathan, & Rogers, 1991). The model is expressed mathematically as:

$$P(u = 1 \mid \theta) = c + \frac{1 - c}{1 + e^{-Da(\theta - b)}}$$

where u is an item response, θ is the estimated ability, a is item discrimination, b is item difficulty, c is the pseudo-guessing parameter, D is a scaling factor (D = 1.7) devised to bring the IRT model close to a cumulative normal curve, and e is a transcendental number whose value is approximately 2.718. Because the c parameter is often poorly estimated, a prior distribution (M = 0.2, SD = 1), following Thissen (1991), was applied; Thissen, Steinberg, & Wainer (1988) proposed that a prior be placed on the c parameters when DIF is studied with the 3PL IRT model. The IRT LR is a model-based approach that compares a compact model, in which all parameters are constrained to be equal across groups (hence no DIF), with an augmented model that permits parameters to differ across groups. The fit of each model to the data is estimated using the likelihood ratio goodness-of-fit statistic, G², and the difference in G² between the two models is tested against the chi-square distribution. Item discrimination (the a parameter) and item difficulty (the b parameter) were thus evaluated by means of the probability of the chi-square statistic. If the a parameter is invariant across groups, the item shows either uniform DIF or no DIF; if, in addition, the b parameter differs significantly, the item shows uniform DIF. If the a parameter of a studied item differs across groups, the item shows non-uniform DIF, irrespective of the b parameter.
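A minimal sketch of the comparison just described: given the log-likelihoods of the compact and augmented models, G² is referred to a chi-square distribution with degrees of freedom equal to the number of freed parameters. The function name and the log-likelihood values are hypothetical; the study's actual model fitting is not reproduced here.

```python
from scipy import stats

def irt_lr_dif(loglik_compact, loglik_augmented, df):
    """IRT-LR DIF test: G2 = -2 * (logL_compact - logL_augmented),
    referred to chi-square with `df` freed parameters."""
    g2 = -2.0 * (loglik_compact - loglik_augmented)
    return g2, stats.chi2.sf(g2, df)

# Hypothetical log-likelihoods for one studied item (a, b, c freed: df = 3):
g2, p = irt_lr_dif(-10452.7, -10447.1, df=3)
print(f"G2 = {g2:.2f}, p = {p:.4f}")  # p < .05 flags the item as DIF
```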

Results

The Outcomes of Research Question 1

The results of the DIF investigations using the IRT 3PL LR model are shown in Tables 2, 3, and 4. These tables report the following quantities:

  1. (b) standing for item difficulty
  2. (a) standing for item discrimination
  3. (c) revealing guessing
  4. (G²) revealing the likelihood ratio goodness-of-fit statistic
  5. (χ²) representing the chi-square statistic
  6. (p) representing the probability, or test of significance

Speaking & Writing

This part includes between 38 and 57 items. For clear and reliable calculations, 57 questions were considered for the Speaking and Writing part, the maximum number of items in the PTE Speaking and Writing section (the same approach was applied to the other parts of the test). To detect DIF, each item was analyzed with respect to the 3PL IRT model; following Thissen, Steinberg, & Wainer (1988), the effects of the c parameter were controlled in advance. As shown in Table 2, twelve items (4, 6, 7, 13, 17, 29, 34, 38, 46, 47, 52, and 53) were identified as showing DIF at the 0.05 significance level. Two items (7 and 17) displayed no DIF, and seven items (4, 6, 13, 29, 50, 55, and 57) exhibited non-uniform DIF.

Table 2: Speaking and Writing

Reading 

This part included 20 items. To detect DIF, each item was scrutinized with respect to the 3PL IRT model, with the plausible effects of the c parameter controlled in advance, as recommended by Thissen, Steinberg, & Wainer (1988). As Table 3 indicates, five items (3, 10, 13, 14, and 15) were found to show DIF at the 0.05 significance level.

Table 3: Reading

Listening 

This section included 25 items. To detect DIF, each item was investigated with respect to the 3PL IRT model while the probable effects of the c parameter were controlled in advance, following Thissen, Steinberg, & Wainer's (1988) recommendations. As shown in Table 4, three items (1, 14, and 20) were recognized as showing DIF at the 0.05 significance level.

Table 4: Listening 

Comparing Two Groups Based on Descriptive Statistics

To discover which group (Engineering vs. Sciences) performed better in each part and on the whole test, an independent samples t-test comparing the two groups' means was carried out. As Tables 5 and 6 illustrate, the mean score of the Science test takers in the Listening section (10.36) is higher than that of the Engineering test takers (9.33), but the difference is not significant at the 0.05 level. In Speaking and Writing (S & W), the mean score of the Science test takers (14.89) is higher than the Engineering test takers' (10.69), a difference that is significant at the 0.05 level. In Reading, the mean score of the Science test takers (19.94) is higher than that of the Engineering test takers (15.55), but the difference is not significant at the 0.05 level. For the total test, comparing the mean score of the Science test takers (45.52, SD = 11.11) with that of the Engineering test takers (35.55, SD = 13.38) shows that the Science test takers outperformed the Engineering group; it can be inferred that the exam was statistically easier for the Science test takers at the 0.05 level.

Table 5: Descriptive statistics for the Comparison of Two Groups (Engineering vs. Sciences) in Three Parts of PTE
Table 6: Independent sample t-test for comparing two groups (Engineering vs. Sciences) in each part of the exam as well as the whole test 
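The group comparison reported in Tables 5 and 6 can be reproduced in outline with scipy's independent samples t-test. The scores below are simulated around the reported Speaking & Writing group means; the raw data and section standard deviations are not published here, so the spread values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated S & W scores around the reported group means (14.89 vs. 10.69);
# the standard deviations are assumed, not taken from the study.
science = rng.normal(loc=14.89, scale=4.5, size=125)
engineering = rng.normal(loc=10.69, scale=4.5, size=125)

t_stat, p_value = stats.ttest_ind(science, engineering)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant at the .05 level
```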

The descriptive statistics and reliability estimates for the data sample (n = 250) on the PTE total test and its three sections are given in Table 7. As presented there, the PTE test proved to be quite reliable: the reliability for the whole PTE test and for the Listening, Speaking & Writing, and Reading parts was .95, .88, .82, and .93, respectively.

Table 7: Reliability Estimate Analyses 
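The paper does not state which reliability coefficient was used; assuming an internal-consistency estimate such as Cronbach's alpha, it can be computed from the person-by-item score matrix as in the sketch below (the function name and demo scores are illustrative).

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_persons, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical 0/1 item scores for five test takers on four items:
demo = [[1, 1, 1, 0], [1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 1], [0, 0, 0, 0]]
print(round(cronbach_alpha(demo), 2))  # ~0.79 for this toy matrix
```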

 

Qualitative Results

Finding and removing DIF items is essential for test fairness and validity: items and test scores must measure the latent traits of all test takers precisely. Although the PTE has undergone substantial changes and revisions since its development, both test takers and test developers still question whether the test is fair for all groups of individuals. To address such doubts, the present study applied the IRT 3PL model to the PTE proficiency exam to identify items flagging DIF. The criterion variables were Listening, Speaking & Writing, Reading, and the examinees' academic field of study.

The findings show that items in different parts of the test may be associated with certain characteristics of individuals and may therefore introduce bias into the assessment of their proficiency. These inconsistencies were not especially large, yet they denote that the difficulty level of the items was not the same for the two groups of examinees from different fields of study. As Zumbo (2007) has noted, such discrepancies in examinees' performance may be linked to prevailing covariates. In this study, almost twenty percent of the original questions were ultimately flagged as showing differential item functioning; they should be discarded from the test's next administration. This contrasts with the general international results proposed by McBride (1997), who holds that about one third of original items need to be deleted in any test. The findings of this research are in line with earlier studies in which speaking, vocabulary, listening, and reading were found to cause disparities among examinees' performance and hence DIF (Grabe, 2009; Koda, 2005). Tittle (1982) and Clauser (1990) suggest such items might leave the disadvantaged group less motivated on the exam. At the same time, there may be other, unknown sources of DIF. Since DIF is usually scrutinized when different groups of students are compared, a large DIF value signals the presence of an extra construct that may produce the differences among test takers.

All in all, test developers are strongly encouraged to make DIF analysis a significant part of their programs in order to improve their assessment procedures. Combining statistical analysis with the researchers' knowledge and skills can help test developers judge whether DIF-flagged items are fair or not.

Conclusion

With regard to the findings of this study, it can be taken as read that when the data were divided according to the variable under study, different patterns emerged. In this study, twenty out of 91 items were detected as flagging DIF. As a general finding, test takers whose academic major was Science outperformed the Engineering students, especially in the S&W and Reading sections. S&W and Reading play a pivotal role in any language proficiency test, so it is worthwhile to dedicate further time and energy in the learning context to teaching these parts more systematically. Learners should be helped to better appreciate the importance of these skills and do their best to improve them. This study has implications for PTE test developers and for those who take the test: the former are strongly encouraged to conduct more studies to identify items that may flag DIF and to heed researchers' findings in this regard, while the latter can be assured that test scores do not favor any specific type of examinee. Nonetheless, given that gender is also a contributing factor, a post hoc study is recommended to inspect the influence of the gender variable and find the items that cause DIF owing to it. Future studies should also consider how other variables, including participants' background knowledge, test-wiseness, L1, and culture, would disclose more information about items showing DIF. The IRT model gives researchers access to an account of bias that is convenient to grasp and interpret, and the outcomes of this study help test developers identify sources of bias. It is vital to recall that test developers' decisive interest lies in the kinds of decisions made on the basis of test scores, since test takers' futures depend, partly or wholly, on such verdicts. Newer methods of psychometric analysis should be established and applied in further studies, as such innovations may permit experimental investigations and increase the accuracy of measurement.

List of Abbreviations:

DIF: Differential Item Functioning

IRT: Item Response Theory

PTE: Pearson Test of English

LR: Likelihood Ratio 

ICC: Item Characteristic Curve

TID: Transformed Item Difficulty 

SAT: Scholastic Aptitude Test

FCAT: Florida Comprehensive Achievement Test 

GMAC: Graduate Management Admission Council

CEFR: Common European Framework of Reference for Languages

VUE: Virtual User Environment 

GMAT: Graduate Management Admission Test

References
