Work teams are an increasingly common organizational structure as organizations seek to become more agile and achieve better outcomes (Bersin, 2016; Deloitte, 2018). Accordingly, organizational researchers seek to accurately recognize and understand various aspects of team dynamics, which are often measured by capturing team-member perceptions. When these perceptions are shared among team members, team consensus constructs (e.g., team cohesion, conflict, psychological safety, satisfaction, task interdependence, liking, and viability) shed light on team functioning and performance. Researchers typically assess the psychometric properties of these measures at the individual level (e.g., factor analysis, variance/covariance matrices) without examining whether the strength of, and relationships among, measures' indicators vary at the between-team level where the constructs theoretically operate (Carless & De Paola, 2000; Edmondson, 1999; Jehn & Mannix, 2001; Van der Vegt et al., 2001). This misalignment between theory and measurement calls into question the quality of measures of team consensus constructs and the theoretical development built on research using them. I examined the extent to which this misalignment is problematic, as well as potential reasons for cross-level measurement and structural variance in and among measures. Using archival data on more than 3,000 project-based teams, I assessed these measures within a multilevel factor analytic framework in R and Mplus, testing for cross-level measurement and structural variance. The results demonstrated that measurement quality should be assessed at the theoretically relevant level of analysis, that the degree of psychometric isomorphism is in part a function of within-team agreement and the wording of the measure, and that misalignment has consequences for convergent and discriminant validity. Future research should address the lack of discriminant validity among some measures and the potential for construct proliferation.