
Research Methods | Research Design: How to Publish in a Top Management Journal (Part 2)

Joyce E. Bono et al. | Chinese Marketing Association of Universities | 2023-02-02


Reprinted from Nankai Business Review


Most scholars, as part of their doctoral education, take a research methodology course in which they learn the basics of good research design, including that design should be driven by the questions being asked and that threats to validity should be avoided. For this reason, there is little novelty in our discussion of research design. Rather, we focus on common design issues that lead to rejected manuscripts at AMJ. The practical problem confronting researchers as they design studies is that (a) there are no hard and fast rules to apply; matching research design to research questions is as much art as science; and (b) external factors sometimes constrain researchers' ability to carry out optimal designs (McGrath, 1981).


Access to organizations, the people in them, and rich data about them presents a significant challenge for management scholars, but if such constraints become the central driver of design decisions, the outcome is a manuscript with many plausible alternative explanations for the results, which leads ultimately to rejection and the waste of considerable time, effort, and money. Choosing the appropriate design is critical to the success of a manuscript at AMJ, in part because the fundamental design of a study cannot be altered during the revision process. Decisions made during the research design process ultimately impact the degree of confidence readers can place in the conclusions drawn from a study, the degree to which the results provide a strong test of the researcher's arguments, and the degree to which alternative explanations can be discounted. In reviewing articles rejected by AMJ during the past year, we identified three broad design problems that were common sources of rejection: (a) mismatch between research question and design, (b) measurement and operational issues (i.e., construct validity), and (c) inappropriate or incomplete model specification.



Matching Research Question and Design

Cross-sectional data. Use of cross-sectional data is a common cause of rejection at AMJ, in both micro and macro research. Rejection does not happen because such data are inherently flawed or because reviewers or editors are biased against them. It happens because many (perhaps most) research questions in management implicitly address issues of change, even when they are not framed as such. The problem with cross-sectional data is that they are mismatched with research questions that implicitly or explicitly deal with causality or change, strong tests of which require either measuring some variable more than once or manipulating one variable that is subsequently linked to another. For example, research addressing such topics as the effects of changes in organizational leadership on a firm's investment patterns, the effects of CEO or TMT stock options on a firm's actions, or the effects of changes in industry structure on behavior implicitly addresses causality and change. Similarly, when researchers posit that managerial behavior affects employee motivation, that HR practices reduce turnover, or that gender stereotypes constrain the advancement of women managers, they are also implicitly testing change and thus cannot conduct adequate tests with cross-sectional data, regardless of whether those data were drawn from a preexisting database or collected via an employee survey. Researchers simply cannot develop strong causal attributions with cross-sectional data, nor can they establish change, regardless of which analytical tools they use. Instead, longitudinal, panel, or experimental data are needed to make inferences about change or to establish strong causal inferences. For example, Nyberg, Fulmer, Gerhart, and Carpenter (2010) created a panel data set and used fixed-effects regression to model the degree to which CEO-shareholder financial alignment influences future shareholder returns. This data structure allowed the researchers to control for cross-firm heterogeneity and appropriately model how changes in alignment within firms influenced shareholder returns.
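
To make the contrast concrete, here is a minimal simulation sketch in the spirit of, but not reproducing, the fixed-effects logic of Nyberg et al. (2010). All names (alignment, returns, firm_quality) are hypothetical; the point is only that demeaning within firms removes stable unobserved heterogeneity that contaminates a pooled, cross-section-style estimate.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_firms, n_years = 200, 6
n = n_firms * n_years
firm = np.repeat(np.arange(n_firms), n_years)
firm_quality = np.repeat(rng.normal(0, 2, n_firms), n_years)  # unobserved, stable per firm
alignment = firm_quality + rng.normal(0, 1, n)                # correlated with the unobservable
returns = 0.5 * alignment - 1.0 * firm_quality + rng.normal(0, 1, n)

df = pd.DataFrame({"firm": firm, "alignment": alignment, "returns": returns})

# Pooled, cross-section-style slope: contaminated by between-firm differences.
pooled = np.polyfit(df["alignment"], df["returns"], 1)[0]

# Fixed-effects (within) slope: demeaning by firm removes firm_quality entirely.
within = df.groupby("firm")[["alignment", "returns"]].transform(lambda s: s - s.mean())
fe = np.polyfit(within["alignment"], within["returns"], 1)[0]

print(f"pooled slope:        {pooled:+.2f}  (biased; here even the sign flips)")
print(f"fixed-effects slope: {fe:+.2f}  (recovers the true within-firm 0.50)")
```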


Our point is not to denigrate the potential usefulness of cross-sectional data. Rather, we point out the importance of carefully matching research design to research question, so that a study or set of studies is capable of testing the question of interest. Researchers should ask themselves during the design stage whether their underlying question can actually be answered with their chosen design. If the question involves change or causal associations between variables (any mediation study implies causal associations), cross-sectional data are a poor choice.


Inappropriate samples and procedures. Much organizational research, including that published in AMJ, uses convenience samples, simulated business situations, or artificial tasks. From a design standpoint, the issue is whether the sample and procedures are appropriate for the research question. Asking students with limited work experience to participate in experimental research in which they make executive selection decisions may not be an appropriate way to test the effects of gender stereotypes on reactions to male and female managers. But asking these same students to participate in a scenario-based experiment in which they select the manager they would prefer to work for may present a good fit between sample and research question. Illustrating this notion of matching research question with sample is a study on the valuation of equity-based pay in which Devers, Wiseman, and Holmes (2007) used a sample of executive MBA students, nearly all of whom had experience with contingent pay. The same care used in choosing a sample needs to be taken in matching procedures to research question. If a study involves an unfolding scenario wherein a subject makes a series of decisions over time, responding to feedback about those decisions, researchers will be well served by collecting data over time, rather than packing a series of decision and feedback points into a single 45-minute laboratory session.


Our point is not to suggest that certain samples (e.g., executives or students) or procedures are inherently better than others. Indeed, at AMJ we explicitly encourage experimental research because it is an excellent way to address questions of causality, and we recognize that important questions, especially those that deal with psychological processes, can often be answered equally well with university students or organizational employees (see AMJ's August 2008 From the Editors [vol. 51: 616-620]). What we ask of authors, whether their research occurs in the lab or the field, is that they match their sample and procedures to their research question and clearly make the case in their manuscript for why these samples or procedures are appropriate.



Measurement and Operationalization

Researchers often think of validity once they begin operationalizing constructs, but this may be too late. Prior to making operational decisions, an author developing a new construct must clearly articulate the definition and boundaries of the new construct, map its association with existing constructs, and avoid assuming that scales with the same name reflect the same construct and that scales with different names reflect different constructs (i.e., the jingle-jangle fallacies [Block, 1995]). Failure to define the core construct often leads to inconsistency in a manuscript. For example, in writing a paper, authors may initially focus on one construct, such as organizational legitimacy, but later couch the discussion in terms of a different but related construct, such as reputation or status. In such cases, reviewers are left without a clear understanding of the intended construct or its theoretical meaning. Although developing theory is not a specific component of research design, readers and reviewers of a manuscript should be able to clearly understand the conceptual meaning of a construct and see evidence that it has been appropriately measured.


Inappropriate adaptation of existing measures. A key challenge for researchers who collect field data is getting organizations and managers to comply, and survey length is frequently a point of concern. An easy way to reduce survey length is to eliminate items. Problems arise, however, when researchers pick and choose items from existing scales (or rewrite them to better reflect their unique context) without providing supporting validity evidence. There are several ways to address this problem. First, if a manuscript includes new (or substantially altered) measures, all the items should be included in the manuscript, typically in an appendix. This allows reviewers to examine the face validity of the new measures. Second, authors might administer both measures (the original and the shortened version) to a subsample or to an entirely different sample as a way of demonstrating high convergent validity between them. Even better would be including several other key variables in the nomological network, to demonstrate that the new or altered measure is related as expected to similar and dissimilar constructs.
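
As a hypothetical illustration of the second remedy, the sketch below scores a simulated 8-item original scale and a 4-item short form on the same respondents and reports their convergence plus the short form's internal consistency; cronbach_alpha is a small helper defined here, not a library function, and the item data are simulated rather than real survey responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency for a respondents-by-items score matrix."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
trait = rng.normal(size=(300, 1))                        # latent construct
full_items = trait + rng.normal(0, 0.8, size=(300, 8))   # 8-item original scale
short_items = full_items[:, :4]                          # 4-item short form

r = np.corrcoef(full_items.mean(axis=1), short_items.mean(axis=1))[0, 1]
print(f"convergent r(full, short) = {r:.2f}")   # high r supports the short form
print(f"alpha(short form)         = {cronbach_alpha(short_items):.2f}")
```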


Inappropriate application of existing measures. Another way to raise red flags with reviewers is to use existing measures to assess completely different constructs. We see this problem occurring particularly among users of large databases. For example, if prior studies have used an action such as a change in format (e.g., by a restaurant) as a measure of strategic change, and a submitted paper uses this same action (change in format) as a measure of organizational search, we are left with little confidence that the authors have measured their intended construct. Given the cumulative and incremental nature of the research process, it is critical that authors establish the uniqueness of their new construct, how it relates to existing constructs, and the validity of their operationalization.


Common method variance. We see many rejected AMJ manuscripts in which data are not only cross-sectional but are also assessed via a common method (e.g., a survey in which multiple predictor and criterion variables are completed by a single individual). Common method variance presents a serious threat to the interpretation of observed correlations, because such correlations may be the result of systematic error variance due to measurement methods, including rater effects, item effects, or context effects. Podsakoff, MacKenzie, Lee, and Podsakoff (2003) discussed common method variance in detail and suggested ways to reduce its biasing effects (see also Conway & Lance, 2010).
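
One coarse screen sometimes reported alongside these remedies is Harman's single-factor test: if a single factor accounts for the bulk of the shared variance across all items, method variance is a plausible worry. It is a weak diagnostic (Podsakoff et al. favor stronger procedural and statistical remedies), but a minimal sketch, assuming a hypothetical respondents-by-items matrix, looks like this:

```python
import numpy as np

def first_factor_share(X):
    """Share of total standardized item variance on the first principal factor."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize each item
    eig = np.linalg.eigvalsh(np.cov(Z, rowvar=False))  # eigenvalues of item correlations
    return eig.max() / eig.sum()

rng = np.random.default_rng(2)
method = rng.normal(size=(250, 1))                  # shared rater/method factor
items = 0.9 * method + rng.normal(size=(250, 12))   # every item loads on it

print(f"first-factor share: {first_factor_share(items):.0%}")  # large share => red flag
```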


Problems of measurement and operationalization of key variables in AMJ manuscripts have implications well beyond psychometrics. At a conceptual level, sloppy and imprecise definition and operationalization of key variables threaten the inferences that can be drawn from the research. If the nature and measurement of the underlying constructs are not well established, a reader is left with little confidence that the authors have actually tested the model they propose, and reasonable reviewers can find multiple plausible interpretations for the results. As a practical matter, imprecise operational and conceptual definitions also make it difficult to quantitatively aggregate research findings across studies (i.e., to conduct meta-analyses).



Model Specification

One of the challenges of specifying a theoretical model is that it is practically infeasible to include every possible control variable and mediating process, because the relevant variables may not exist in the database being used, or because organizations constrain the length of surveys. Yet careful attention to the inclusion of key controls and mediating processes during the design stage can provide substantial payback during the review process.


Proper inclusion of control variables. The inclusion of appropriate controls allows researchers to draw more definitive conclusions from their studies. Research can err on the side of too few or too many controls. Control variables should meet three conditions for inclusion in a study (Becker, 2005; James, 1980). First, there is a strong expectation that the variable is correlated with the dependent variable, owing to a clear theoretical tie or prior empirical research. Second, there is a strong expectation that the control variable is correlated with the hypothesized independent variable(s). Third, there is a logical reason that the control variable is not a more central variable in the study, either a hypothesized one or a mediator. If a variable meeting these three conditions is excluded from the study, the results may suffer from omitted variable bias. However, if control variables that do not meet these three tests are included, they may hamper the study by unnecessarily soaking up degrees of freedom or may bias the findings related to the hypothesized variables (increasing either Type I or Type II error) (Becker, 2005). Thus, researchers should think carefully about the controls they include, being sure to include proper controls while excluding superfluous ones.
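
A toy simulation with made-up coefficients illustrates the logic of the first two conditions: when a variable z drives both the predictor and the outcome, omitting it biases the focal estimate (omitted variable bias), whereas including it recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
z = rng.normal(size=n)                       # control correlated with both x and y
x = 0.7 * z + rng.normal(size=n)
y = 0.3 * x + 0.8 * z + rng.normal(size=n)   # true effect of x is 0.30

def slope_on_x(*regressors):
    X = np.column_stack([np.ones(n), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]   # coefficient on x

print(f"z omitted:  b_x = {slope_on_x(x):.2f}  (inflated above 0.30)")
print(f"z included: b_x = {slope_on_x(x, z):.2f}  (close to 0.30)")
```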


Operationalizing mediators. A unique characteristic of articles in AMJ is that they are expected to test, build, or extend theory, an expectation that often takes the form of explaining why a set of variables is related. But theory alone is not enough; it is also important that mediating processes be tested empirically. The question of when mediators should be included in a model (and which mediators) needs to be addressed in the design stage. When an area of inquiry is new, the focus may be on establishing a causal link between two variables. But once an association has been established, it becomes critical for researchers to clearly describe and measure the process by which variable A affects variable B. As an area of inquiry matures, multiple mediators may need to be included. For example, one strength of the transformational leadership literature is that many mediating processes have been studied (e.g., LMX [Kark, Shamir, & Chen, 2003; Pillai, Schriesheim, & Williams, 1999; Wang, Law, Hackett, Wang, & Chen, 2005]), but a weakness of this literature is that most of these mediators, even when they are conceptually related to each other, have been studied in isolation. Typically, each is treated as if it were the unique process by which managerial actions influence employee attitudes and behavior, and other known mediators are not considered. Failing to assess known and conceptually related mediators makes it difficult for authors to convince reviewers that their contribution is a novel one.
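
As a generic illustration of testing a mediating process empirically (not the procedure of any study cited above), the sketch below estimates the indirect effect a*b in a simulated X -> M -> Y chain and bootstraps its confidence interval, a now-common alternative to causal-steps logic; all variables and coefficients are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # mediating process (path a = 0.5)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # path b = 0.4, small direct effect

def indirect_effect(idx):
    a = np.polyfit(x[idx], m[idx], 1)[0]                    # X -> M
    design = np.column_stack([np.ones(idx.size), m[idx], x[idx]])
    b = np.linalg.lstsq(design, y[idx], rcond=None)[0][1]   # M -> Y controlling X
    return a * b

boot = [indirect_effect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(np.arange(n)):.2f}, "
      f"95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```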



Conclusion

Although research methodologies evolve over time, there has been little change in the fundamental principles of good research design: match your design to your question, match construct definitions with operationalizations, carefully specify your model, use measures with established construct validity or provide such evidence, and choose samples and procedures that are appropriate to your unique research question. The core problem with AMJ submissions rejected for design problems is not that they were well-designed studies that ran into problems during execution (though this undoubtedly happens); it is that the researchers made too many compromises at the design stage. Whether a researcher depends on existing databases, actively collects data in organizations, or conducts experimental research, compromises are a reality of the research process. The challenge is not to compromise too much (Kulka, 1981).


A pragmatic approach to research design starts with the assumption that most single-study designs are flawed in some way with respect to validity. The best approach to a strong research design, then, may not lie in eliminating threats to validity (though they can certainly be reduced during the design process), but rather in conducting a series of studies. Each study in a series will have its own flaws, but together the studies may allow for stronger inferences and more generalizable results than any single study on its own. In our view, multiple-study and multiple-sample designs are vastly underutilized in the organizational sciences and in AMJ submissions. We encourage researchers to consider the use of multiple studies or samples, each addressing the flaws of the other. This can be done by combining field studies with laboratory experiments (e.g., Grant & Berry, 2011) or by testing multiple industry data sets to assess the robustness of findings (e.g., Beck, Bruderl, & Woywode, 2008). As noted in AMJ's "Information for Contributors," it is acceptable for multiple-study manuscripts to exceed the 40-page guideline.


A large percentage of manuscripts submitted to AMJ that are either never sent out for review or that fare poorly in the review process (i.e., all three reviewers recommend rejection) have flawed designs, but even manuscripts published in AMJ are not perfect. They sometimes have designs that cannot fully answer their underlying questions, sometimes use poorly validated measures, and sometimes have misspecified models. Addressing all possible threats to validity in each and every study would be impossibly complicated, and empirical research might never get conducted (Kulka, 1981). But honestly assessing threats to validity during the design stage of a research effort and taking steps to minimize them, either by improving a single study or by conducting multiple studies, will substantially improve the potential for an ultimately positive outcome.



Authors:

Joyce E. Bono

University of Florida

Gerry McNamara

Michigan State University

Translation reviewed by:

Editorial Office of Nankai Business Review


Source:

From the Editors: Publishing in AMJ, Part 2: Research Design. Academy of Management Journal, 2011, Vol. 54, No. 4: 657-660.

 


REFERENCES

Beck, N., Bruderl, J., & Woywode, M. 2008. Momentum or deceleration? Theoretical and methodological reflections on the analysis of organizational change. Academy of Management Journal, 51: 413-435.

Becker, T. E. 2005. Potential problems in the statistical control of variables in organizational research: A qualitative analysis with recommendations. Organizational Research Methods, 8: 274-289.

Block, J. 1995. A contrarian view of the five-factor approach to personality description. Psychological Bulletin, 117: 187-215.

Conway, J. M., & Lance, C. E. 2010. What reviewers should expect from authors regarding common method bias in organizational research. Journal of Business and Psychology, 25: 325-334.

Devers, C. E., Wiseman, R. M., & Holmes, R. M. 2007. The effects of endowment and loss aversion in managerial stock option valuation. Academy of Management Journal, 50: 191-208.

Grant, A. M., & Berry, J. W. 2011. The necessity of others is the mother of invention: Intrinsic and prosocial motivations, perspective taking, and creativity. Academy of Management Journal, 54: 73-96.

James, L. R. 1980. The unmeasured variables problem in path analysis. Journal of Applied Psychology, 65: 415-421.

Kark, R., Shamir, B., & Chen, G. 2003. The two faces of transformational leadership: Empowerment and dependency. Journal of Applied Psychology, 88: 246-255.

Kulka, R. A. 1981. Idiosyncrasy and circumstance. American Behavioral Scientist, 25: 153-178.

McGrath, J. E. 1981. Introduction. American Behavioral Scientist, 25: 127-130.

Nyberg, A. J., Fulmer, I. S., Gerhart, B., & Carpenter, M. A. 2010. Agency theory revisited: CEO return and shareholder interest alignment. Academy of Management Journal, 53: 1029-1049.

Pillai, R., Schriesheim, C. A., & Williams, E. S. 1999. Fairness perceptions and trust as mediators for transformational and transactional leadership: A two-sample study. Journal of Management, 25: 897-933.

Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. 2003. Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88: 879-903.

Wang, H., Law, K. S., Hackett, R. D., Wang, D., & Chen, Z. X. 2005. Leader-member exchange as a mediator of the relationship between transformational leadership and followers' performance and organizational citizenship behavior. Academy of Management Journal, 48: 420-432.










