
Reliability testing is a critical step in validating research instruments, especially in quantitative pilot studies in the social sciences. SPSS has become the standard software for pilot studies because it provides Cronbach's Alpha values and item-total correlation diagnostics rooted in Classical Test Theory. These tools are designed to assess internal consistency and refine questionnaire items before researchers proceed to full-scale data collection. Nevertheless, with the growing popularity of PLS-SEM and advances in software, many novice researchers have turned to SmartPLS even while still in the pilot phase, drawn by its more sophisticated metrics and user-friendly visual interface. This paper argues that SmartPLS, although powerful for theory testing and the validation of structural models, is unsuitable for early-stage reliability testing. Unlike SPSS, SmartPLS assumes a well-defined measurement model and thus omits critical diagnostics such as item-total statistics, which are essential for identifying weak items. Because SmartPLS presumes the model structure is settled, it reports metrics such as Composite Reliability and Average Variance Extracted directly; these metrics, while meaningful, can be misleading when applied to small pilot datasets whose constructs are not yet mature. The paper therefore shows how reliance on SmartPLS during a pilot study can create false confidence in a model's adequacy and ultimately undermine instrument development. By weighing the strengths and weaknesses of SPSS and SmartPLS for pilot studies, it provides a clear justification for tool selection: SPSS remains appropriate throughout instrument development, while SmartPLS should not be adopted prematurely.
Researchers are encouraged to match their choice of software to the specific research phase in order to prioritise methodological rigour and obtain relevant outcomes.
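The two diagnostics the abstract attributes to SPSS, Cronbach's Alpha and the corrected item-total correlation, follow directly from Classical Test Theory and can be computed by hand. The sketch below is a minimal illustration of both formulas on hypothetical pilot data (the dataset, sample size, and item count are invented for the example and are not from the paper):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation: each item correlated with the
    sum of the remaining items (the item itself excluded from the total)."""
    _, k = items.shape
    out = np.empty(k)
    for j in range(k):
        rest_total = np.delete(items, j, axis=1).sum(axis=1)
        out[j] = np.corrcoef(items[:, j], rest_total)[0, 1]
    return out

# Hypothetical pilot data: 10 respondents answering 4 five-point Likert items.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(10, 1))          # shared trait per respondent
noise = rng.integers(-1, 2, size=(10, 4))        # per-item variation
data = np.clip(base + noise, 1, 5).astype(float)

alpha = cronbach_alpha(data)
item_total = corrected_item_total(data)
```

In a pilot-study workflow, an item with a low corrected item-total correlation would be flagged for revision or removal before alpha is recomputed, which is exactly the iterative refinement the abstract says SmartPLS's construct-level outputs do not support.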
