A Comparison of Student Success Outcomes Related to the Non-Standardized and Standardized Texas Success Initiative

In 2015, the Texas Higher Education Coordinating Board (THECB) adopted the 60x30TX plan to increase Higher Education (HE) access and completion for all state students. The plan's target asserts that by 2030, at least 60 percent of Texans ages 25-34 will hold a certificate or degree. Among the initiatives tied to the overall goal of raising HE inclusion rates is the Texas Success Initiative (TSI), formally launched a few years earlier, in 2013. The TSI lays out specific requirements for uniform placement testing standards for students entering community colleges to measure college-level preparedness. The purpose of this study was to determine whether placement is affected by the instrument applicants use, specifically, the non-standardized or the standardized version of the Texas Success Initiative Assessment (TSIA). Final grades of applicants placed into Freshman Composition (ENGL 1301), instead of Developmental Education (DE), served as the measure of student success appropriate for this study. The study sample was selected from two large urban community colleges in Texas. Within the parameters of this study, findings indicated that non-standardized and standardized placement tests have no effect on student success related to placement into ENGL 1301. These findings highlight the need for additional research on the comparative impact of standardized and non-standardized testing, and the subsequent overall impact on degree or certificate completion.


Introduction
Standardization exists in many forms in higher education, including processes and practices related to college admission. For the purpose of this study, standardization of admissions testing instruments was the focus. According to the American Association of Community Colleges (AACC, 2017a), most four-year colleges have historically relied on, and continue to rely on, either the American College Test (ACT) or the Scholastic Aptitude Test (SAT), both of which are standardized. Conversely, most community colleges used either the COMPASS or the ACCUPLACER, but standardization practices are not in place, as these instruments can be, and are, customized per institution (Martorell et al., 2013). In the community college system, applicants' scores on the COMPASS and ACCUPLACER determine admission and indicate the appropriate level of courses in which students are eligible to enroll. While underprepared or underperforming students can be rejected from four-year colleges, applicants are rarely turned down by community colleges, and instead are enrolled in Developmental Education (DE) classes. While placement testing increases access to HE in comparison to admissions examinations, states continue to limit the number of community colleges within certain geographic locations. Thus, the consequences of non-standardized placement tests leave limited alternatives for community college applicants. Some studies indicate that access to HE via community colleges is not problematic, but that DE policies should be explored (Belfield and Crosta, 2012). Thus, standardization of placement testing might influence DE requirements more so than HE access.
Open enrollment policies at community colleges are facilitated by supporting remediation, creating an intersection of access to HE and availability of DE (AACC, 2017a). Nationally, much debate exists regarding DE placement policies (Belfield and Crosta, 2012), and the benefits of DE are challenged by many studies. Some indicate that DE has no proven influence on student achievement (Belfield and Crosta, 2012). Other research indicates that DE has a positive effect on student achievement (Crisp and Delgado, 2014), or that DE has a negative effect on student achievement (Belfield and Crosta, 2012). Challenges to DE programs include budgetary considerations, as DE is estimated to be a $1 billion annual expenditure nationally (as cited by Brenneman & Harlow in Martorell et al., 2013). Additionally, some applicants could be using placement into DE to decide whether or not to enroll in college (Martorell et al., 2013).
Students testing into DE courses represent approximately one-third of all new students in public HE (as cited by NCES in Martorell et al., 2013). While this is a conservative estimate, DE is viewed as a gateway to HE, emphasizing the importance of exploring the effectiveness of varying types of placement tests (Belfield and Crosta, 2012). As access to HE depends on admissions exams in four-year colleges and on placement tests in community colleges, examining the differences in testing could make public any advantages or disadvantages that may exist in standardized tests, affecting mostly four-year college applicants, and in non-standardized tests, affecting mostly community college applicants. There can be far-reaching consequences to non-standardized tests, which affect more minority and underperforming students, as they are more likely to apply to community colleges, often as their only option (Engberg and Wolniak, 2014).
While standardization is practiced by four-year colleges in their use of the ACT and SAT (Martorell et al., 2013), each institution, including public four-year colleges, has the autonomy to admit students using whatever ACT or SAT scores they choose, in effect setting their own cutoff scores. Community colleges also use cutoff scores autonomously, but they do not necessarily commit to standardized instruments, instead most often choosing different versions of the COMPASS and ACCUPLACER (Silver, 2016). What standardization does exist in community college instruments is voluntary, but Texas has chosen to require it, including standardizing cutoff scores (Fulton, 2012). The influence of standardization of placement test instruments in community colleges in Texas was the focus of this study. Fulton (2012) found statewide standardization in placement testing for community colleges in thirteen states, and that seventeen states had adopted standardization without state mandates. Into the late 1990s, Texas's use of the Texas Academic Skills Program (TASP) test was designed only to facilitate placement into DE courses, as state policy forbade it from influencing acceptance into community colleges. Conversely, the trend now is for the ACCUPLACER to be the deciding factor in both acceptance and placement into DE courses, if the need is indicated (Martorell et al., 2013). Access to HE remains intertwined with DE programs, as remediation is the only level indicated for many community college applicants.

Statement of the Problem
Although community colleges use standardized instruments less often, the requirement of open-enrollment policies (AACC, 2017a) increases HE access for more applicants. Standardization, however, could increase opportunities further. Some standardized tests exist for mandatory use by community colleges, but these are limited to the states in which they were voluntarily created. No national standards exist for community college placement testing, and community colleges are among the few options for students whom four-year colleges are not required to accept.

Purpose of the Study
The purpose of this quasi-experimental, ex post facto study was to explore standardized placement testing, specifically the standardized English Texas Success Initiative Assessment (TSIA), referred to as the Texas Success Initiative (TSI), in public community colleges in Texas, and to examine the possibility that standardization might improve access to HE and, consequently, student success. This was accomplished through exploration of student achievement after placement into college level courses via the TSI. Final grades for students placed into, and completing, Freshman Composition in Texas (ENGL 1301) were compared. The data were provided by two large urban community colleges in Texas.

Review of the Literature
For many years, the five most commonly utilized college assessments were the ACT, SAT, ACCUPLACER, ASSET, and COMPASS (Fields and Parsad, 2012). The ACCUPLACER, ASSET, and COMPASS used the same mean cutoff scores for reading (Fields and Parsad, 2012). Most colleges narrowed their choices to the COMPASS or the ACCUPLACER, but the COMPASS was discontinued in 2016 (Pivik, 2016). In prior years, Pearson Education (Hughes and Scott-Clayton, 2011) reported that 62% of colleges administered the ACCUPLACER; in comparison, its competitor, the COMPASS, was administered by 46% of colleges (Hughes and Scott-Clayton, 2011). The overlap reflects flexibility on the part of the vendors in response to colleges seeking to customize their tests, and colleges choosing different tests, for different subjects, from different vendors. For example, the ACCUPLACER had edged out competitors, including the COMPASS before it was discontinued, by providing 26% of all math and 19% of all reading placement tests (Ristow and Parsad, 2012). As a specific example, Texas partnered with Pearson to create both the Texas Academic Skills Program (TASP) test in 2008, and later the TSI, which became the statewide requirement in fall 2013 (Hughes and Scott-Clayton, 2010). When comparing the COMPASS to the ACCUPLACER, neither has proved to be a superior evaluation tool, due to the severe error rates of both instruments (Belfield and Crosta, 2012). Still, it is rare for an institution to admit a student without some type of required academic ability assessment. Along the same line, 71% of all colleges utilized some kind of math test, and 53% a reading test. Public colleges commonly utilize required academic ability assessments: public community colleges used placement tests for math 100% of the time, and 94% of the time for reading.
The recommended ACCUPLACER cutoff score nationwide on a 20 to 120 scale, was 70 for Elementary Algebra, 57 for College-Level Mathematics, and 76 for Reading Comprehension (Fields and Parsad, 2012).

Standardization Versus Non-Standardization
The ACT and the SAT are standardized, and students are known to shop four-year colleges based on their knowledge of varying cutoff scores (Fletcher, 2014), aware that the instruments are the same but that the cutoff scores vary from institution to institution. Four-year colleges can accept or reject students based on their scores, then move on with the acceptance process, with the most selective institutions intentionally limiting access the most (Fields and Parsad, 2012). Community colleges, however, tend not to use standardized instruments, instead tailoring them to suit their individual goals. Because of the open enrollment mission of community colleges, it is unlikely that applicants at public community colleges will be rejected (AACC, 2017a), but it is most likely that low-scoring applicants will be required to take DE classes (Silver, 2016).
The trend toward standardization of college readiness in any form has been slow in coming. In 2008, Collins (as cited in Martorell et al., 2013) noted that nineteen states required the same instruments and the same "passing standards" (p. 4). However, in 2010, Hughes and Scott-Clayton described statewide standardization of college placement as a beginning trend affecting mostly community colleges, and reported that variation remained in the policies in existence at the time. Also in 2010, Shulock recommended that states be monitored for development of standardized instruments and uniformity in measurement for college readiness, integrating K-12 to prepare applicants (Shulock, 2010). Fulton (as cited in Melguizo et al., 2014) noted an increase in the trend toward statewide legislation, as thirteen states had adopted standardized instruments and standardized cutoff scores by 2012.
These variations in community college placement testing continue, but the trend toward standardization operates mostly within states, not nationally. With some exceptions, mandatory DE placement, if indicated, is also growing (Hughes and Scott-Clayton, 2010). Some research indicates that standardization offers a better focus on preparation for HE (especially for lower SES applicants), better accuracy, ease of transfer to four-year colleges, and ease of further assessment and research by state overseers, according to the National Center for Public Policy and Higher Education (NCPPHE), the Southern Regional Education Board (SREB), and Prince (as cited in Melguizo et al., 2014). Universal community college testing results have been assessed by states transitioning to standardization, revealing mixed results: Illinois's results were found to be significant, while Colorado's and Maine's results were positive but insignificant. Maine's use of a standardized placement test increased enrollment by 4%-6% overall, and by 10% for students refusing to take the SAT (Page and Scott-Clayton, 2016).
Conversely, non-standardization, or customization, of placement testing offers the advantage of revealing whether an individual might lack the academic ability to succeed at the college level at all (Hughes and Scott-Clayton, 2010), but commitment to supporting struggling students is not always the motivation for resisting standardized placement testing. Around the U.S., other states have explored placement test options for public community colleges, with notable impact on DE programs, some arguing that DE serves only to increase time to graduation rather than contributing to student success. For example, Connecticut state law does not permit DE at all, unless it is embedded in other coursework (Zhang et al., 2013). In Florida, there has been a decrease in placement testing, due to exemption allowances, beginning in 2003 and continuing to the present, for applicants who were high school students, and for military veterans who are high school graduates (Zhang et al., 2013). Early results indicate a drop in DE course enrollment (Park et al., 2016), and lower success among students declining to take DE courses (Pain, 2016).
In Texas, the goal of the Texas Education Code §51.3062 Success Initiative (2011) was to require standardized placement testing statewide; DE remains an available route for accessing HE, and is most often a requirement. The Texas Higher Education Coordinating Board (THECB) chose first to limit the number of assessments allowed (H. 1244, 2011). When this initiative expired in 2011, it was repealed. House Bill 1244, 82nd Legislature, 2011, amended Texas Education Code §51.3062(f) by giving the THECB authority to require use of only one placement test, and to make cutoff scores uniform (THECB, n.d.).
There is also an example where placement test standardization at the regional level had a possible influence on access to HE. As in Texas, increased access to HE is a stated goal of public two-year community colleges in the New England states. A study found that up to 90% of one cohort of beginning students in New England tested as unprepared (i.e., in need of remediation). Connecticut, Rhode Island, Maine, and Vermont then created standardized testing policies only for their two-year colleges. The response in Massachusetts was to write a policy to explore standardized testing in both their two-year and four-year colleges. The ACCUPLACER was most commonly used by all of these colleges, but there were no restrictions on standardizing the setting of cutoff scores used for placement, especially for DE (Chan and Srey, 2012).
Some cite research suggesting that 41% of placement testing grade variance is explained by individual traits, such as how motivated students are and their ability to self-regulate and to be assertive (Zientek et al., 2013). This is not to say that non-standardized instruments address all placement needs, however, and this can be observed internationally. Looking beyond the U.S., Qatar, using a localized, non-standardized placement test, found 37% of its students failing the level of English that their placement scores indicated they would pass, and 15% of its students reporting having been placed into a level they found too difficult (Johnson and Riazi, 2016). In the same research, the authors assert that the claims of Hughes and Scott-Clayton (2011) in support of non-standardized instruments have limits (Johnson and Riazi, 2016).

Cutoff Scores
Cutoff scores, referred to in some studies as cut scores, can interfere with insight into standardized placement instruments, because varying cutoff scores can be used to place students regardless of their scores on standardized instruments. Both community colleges and four-year colleges use cutoff scores as conditions of enrollment and placement. Public four-year colleges have the autonomy to reject applicants, but among two-year colleges, particularly community colleges, initiatives aimed at inclusion are far more common, due to their open enrollment missions (AACC, 2017a). This makes community colleges unique, in that they must balance their open enrollment commitment with service to unprepared students by providing such supports as DE. However, community colleges using non-standardized instruments and cutoff scores exercise considerable control over applicants, and the request for clarification of these practices was an unusual aspect of the Ristow and Parsad (2012) study. Some of this information is available in the literature, such as a study that found that Virginia, North Carolina, and Connecticut use uniform cut scores (as cited in Melguizo et al., 2014). Horst and DeMars (2016) note that there is some evidence of shared governance at some institutions, in the form of faculty input on setting cutoff scores, while reporting that this is not always the policy.
When multiple measures are used, scores near the cutoff become less influential, because multiple measures take precedence in determining admission and placement (Bailey et al., 2013). Multiple measures create more inconsistency than variations in cutoff scores or choice of instruments, as multiple measures allow the greatest autonomy in placement decisions (Melguizo et al., 2014). For example, in one study regarding international applicants, only 8% of colleges considered "Advanced Placement and International Baccalaureate scores" (p. 22), while 4% considered high school grades, and 3% or less considered other criteria (Fields and Parsad, 2012). Using different information to assess different cohorts of applicants is controversial, thus justifying surveys such as Ristow and Parsad (2012) seeking more information on cutoff scores.
The ACCUPLACER is among the placement tests that vary most regarding cutoff scores nationally, significantly influencing the definition of "just academically prepared" (Fields and Parsad, 2012). Texas's standardization of the ACCUPLACER (the TSI) offers limited comparison, other than between community colleges within the state (Fields and Parsad, 2012). Like other states that have set statewide standards for placement testing, Texas has involved the committee process in setting statewide cutoff scores (Secolsky et al., 2013). The same authors suggest preparing community college applicants for Freshman Composition and College Algebra courses, and support statewide standardization of placement testing to address equity, transfer, and the need for DE. Fields and Parsad (2012) suggested that further research is needed regarding the use of nationally standardized tests to facilitate placement of students into DE, full disclosure of cutoff scores, and exploration of why there is such variation between community and four-year colleges.

The Bubble
Some students fall between the minimum passing score and the score required to test out of DE and into college level courses. This range of scores is known as the bubble, and students scoring within this range are known as bubble students. Bubbles can also be defined by individual institutions (e.g., to determine how close certain groups of students are to the cutoff scores) to design interventions, and this can be done without uniformity (Belfield and Crosta, 2012).
In Texas, the College of the Mainland (COM, n.d.), in Texas City, defines bubble students as those falling below the cutoff score for college level courses in placement test results, but who are still placed, meaning the college determines them to be capable of succeeding via DE. Multiple measures are included in the placement process, as well as holistic advising (Gray, 2015). Similarly, the Lone Star Community College system, the largest in Texas (McKinney, 2017), defines scoring near the cutoff as the bubble, and as justification for holistic advising. Under holistic advising, the Lone Star Community College system places students in DE while also considering high school success in math, honors in reading or writing, employment history, transportation issues, and the applicant's goal for HE and understanding of the requirements and processes for achieving their academic goals (Lone Star College System, n.d.). These are examples of standardization used to determine placement into DE or college level while not limiting flexibility, by offering support beyond acceptance.
The TSI requires standardization focused on enrollment, and multiple measures tend to be focused on advancing DE students into college level classes, but K-12 in Texas has different practices regarding both standardization and bubbles. Cutoff scores can be set by state representatives and adjusted based on autonomous judgments of how difficult the test is, and the final score on a state assessment can be required to factor into a student's final grade in a course. Institutions can have different agendas for varying their cutoff scores, and the ACCUPLACER has been found to be among the most flexible instruments to adjust for different purposes (Fields and Parsad, 2012).

The Relationship Between Placement Testing and Developmental Education
With the understanding that placement tests are so different from acceptance exams comes the recognition that placement testing and DE are enmeshed. Less academically prepared students who gain access to HE through open enrollment at community colleges are supported by DE programs (Belfield and Crosta, 2012).
Given that 60% of first year HE students take at least one DE course, according to the NCPPHE and SREB (as cited in Melguizo et al., 2014), and that studies of specific measures regarding assessment and placement are lacking, challenges to DE funding seem counterintuitive to some researchers, as community colleges are specifically charged with providing these programs to students with these needs (Bailey et al., 2013). McKinney (2017) cites a National Student Clearinghouse study conducted in 2012 stating that 78% of 2010-2011 graduates in Texas began in community colleges and went on to earn a bachelor's degree. The Texas Public Higher Education Almanac (THECB, 2016a) reports, however, that the six-year graduation rate for students beginning at a community college and earning a degree or certificate is only 27%. Again, emphasizing that DE is provided mostly by community colleges, it is important to distinguish between the enrollment practices of community and four-year colleges, and the resources required.
Whether or not DE is a factor in the decision to require standardized placement testing, Texas's choice is not the practice in all community colleges in the U.S. Standardization of acceptance exams, however, is the choice of most four-year colleges in every state. Moreover, even some of the four-year colleges that do use standardized entrance exams make an exception by offering more focused testing criteria for international applicants with limited English proficiency, but admittedly have found results limited to accepting or rejecting applicants based on little more than entrance exam scores, which varied by institution because multiple measures were also considered. Faculty of those applicants accepted for admission reported issues with English proficiency as the applicants progressed (Ginther and Elder, 2014).
Multiple measures used for placement are met with a variety of DE methods, and community colleges have become creative in their andragogy. Tutoring, and other forms of supplemental instruction, are common at community colleges, and DE programs that have been found to be successful improve retention by mainstreaming students into college level with various support systems, according to Adams, from his work at the Community College of Baltimore County (Deil-Amen, 2011). Texas House Bill 1244 (82R) added to placement testing additional requirements, including diagnostics for NCBOs, DE, and support services (Morales-Vale, 2014). Under §4.55, pre-assessment is required, including DE options such as course pairing, NCBOs, modular courses, and others, as well as financial aid, childcare, transportation, and tutoring. For students testing into DE, in order to be elevated to college level, §4.59 requires consideration of performance in DE and NCBO classes in addition to the TSI (Morales-Vale, 2013).
Nationally, once students are placed into DE, institutions use a variety of courses, supports, and requirements, including tutoring and interaction with educators, which are known to contribute to student success, however it is defined (Glessner, 2015). Still, declining to participate in DE persists among some students. In one study, approximately 25% of DE students resisted accessing available support, citing stigma as the reason and concluding that college was not for them (Deil-Amen, 2011).

Traditional Students Versus Non-Traditional Students
Other examples of common terminology in HE include traditional students and non-traditional students. Traditional students are recognized in HE as those who transition directly from high school to college, and non-traditional students are those not transitioning directly from high school, with the latter more often served by community colleges (Silver, 2016). The non-traditional student population at all universities in Texas was 18%, while at all community colleges in Texas it was 61% (Morales-Vale, 2014). Given the support systems in place at community colleges, these institutions are an appropriate choice for many beginning students in HE (AACC, 2017b).
Community colleges also serve as proving grounds for students, whether traditional or non-traditional, whose high school work is not competitive enough for acceptance into four-year institutions, according to Pascarella and Terenzini (as cited in Tovar, 2015).

The Texas Success Initiative Assessment
Most Texas students are exempt from the TSI, for such reasons as high ACT and SAT scores; specifically, 65% are exempt in math, 63% in reading, and 62% in writing (Morales-Vale, 2014). The same author, in a 2013 study, explained efforts on the part of the state to improve access to HE for those who score below TSI requirements. Per §4.54 Exemptions/Exceptions (2016), DE is not required for those not so indicated by placement testing, and cutoff scores cannot be raised, by law. Assessment is required by §4.55 (2013), as is pre-assessment, including DE options such as course pairing, NCBOs, modular courses, and other methods, as well as financial aid, childcare, transportation, and tutoring. To be considered college level, §4.59 (2015) requires taking the TSI, in addition to appropriate performance in DE and NCBOs. Students who are ESOL (English Speakers of Other Languages) can obtain a waiver from the TSI at first, per §4.54 (2016), but then must take the TSI after earning 15 ESOL DE credits, or attempt to pass first year college courses.
In 2011, HB 1244 (2011) required use of the standardized placement test, now known as the TSI, for all public community colleges in Texas, beginning in fall 2013, making Texas the first state in the nation to require statewide standardization of community college placement testing; other states have now created similar guidelines (Morales-Vale, 2014). In the last year of measurement of the non-standardized TSI, 34% of all Texas applicants to HE tested into DE. Broken down by institutional type, this was 49% of all two-year public college applicants and 11% of all four-year public college applicants. It is important to note that the first semester available to measure results of the standardized TSI was the first semester that the standardized TSI became mandatory: applicants who took the standardized TSI (newly mandated in fall 2013) during the drop/add period and began taking classes immediately yielded final grades in ENGL 1301 during that same semester, making that data applicable to the study.
The TSI represented a contractual partnership between the THECB and the College Board to provide the TSI for two years, and the THECB and RAND are working together to research the TSI's effectiveness under the 60x30TX agenda item IX-G draft (THECB, 2016b). According to the College Board, there is no limit on the number of times applicants can take the TSI (Can I retake the test?, 2015). As the TSI is now the standardized instrument for use throughout the state, passing scores are easily accessible, posted on the College Board website (Interpreting your score, 2015):

College Readiness Cut Scores
Mathematics: a score ranging from 350-390 in the multiple-choice section. Reading: a score ranging from 351-390 in the multiple-choice section. Writing: a score of 5 in the essay section. You can also place in a college course if you receive a 4 on the essay and a score ranging from 363-390 on the multiple-choice section (p. 2).
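The published cut scores above amount to a simple per-section decision rule. The following is a minimal illustrative sketch of that rule; the function name and numeric inputs are assumptions for demonstration, not part of the TSIA itself:

```python
def places_college_level(section, mc_score, essay_score=None):
    """Illustrative decision rule based on the published TSI cut scores.

    section: "math", "reading", or "writing"
    mc_score: multiple-choice score
    essay_score: essay score, used only for the writing section
    """
    if section == "math":
        return 350 <= mc_score <= 390
    if section == "reading":
        return 351 <= mc_score <= 390
    if section == "writing":
        # An essay score of 5 places a student on its own; an essay score
        # of 4 places the student only when paired with a multiple-choice
        # score of 363-390.
        if essay_score == 5:
            return True
        return essay_score == 4 and 363 <= mc_score <= 390
    raise ValueError(f"unknown section: {section}")
```

Under this rule, for example, a multiple-choice score of 350 indicates college readiness in mathematics but falls one point short in reading, illustrating how narrowly the published cut scores separate placement outcomes.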

Methodology
Student success for a cohort of those taking the TSI during the last three semesters of the non-standardized instrument, as measured by grades in ENGL 1301, was compared to student success for a cohort of those tested during the first three semesters of the standardized TSI. For this comparison, the dependent variable was grades in ENGL 1301, and the independent variables were the non-standardized TSI and the standardized TSI. In addition to gathering passing scores in ENGL 1301, demographics (age and gender) were considered.

Population and Sample
The population for this study was from two large urban community colleges in Texas with similar student demographics, who agreed to participate. The sample was created from the student populations at the two community colleges.

Instrumentation
The TSI was the instrument used to determine college readiness. Final grades in ENGL 1301 were the measure used to determine college success. A comparison of results from the non-standardized TSI to the standardized TSI allowed for discussion of the validity and reliability of the instruments. It was assumed that the TSI was an appropriate predictor of student success, which was observed for both tests. Reliability may exist if there are no significant differences between the two instruments, but this might not coincide with validity.

Procedures
Placement test scores and final grades are archived at all community colleges in Texas, and data are made available upon request. The version of the TSI that current applicants take is a standardized instrument, with standardized cutoff scores, designed by educators. Previously, placement tests varied around the state, as did their cutoff scores. Use of the now standardized instrument, including standardized cutoff scores, became mandatory throughout the state, in fall 2013 (Morales-Vale, 2014).
Data from spring 2012 to spring 2013 (three regular semesters) were compared to data from fall 2013 to fall 2014 (three regular semesters). Although mandatory administration of the standardized TSI began in fall 2013 (H. 1244, 2011), some students beginning in that term had tested at the start of the semester, during the drop/add period; therefore, ENGL 1301 results for the first cohort taking the new TSI were taken at the end of fall 2013. Creating these cohorts allowed for a comparison of the last three regular semesters of students tested using the non-standardized TSI to the first three semesters of students tested using the standardized TSI.
For this study, a sample of over four thousand students was examined. As placement tests are designed to predict student success, the relationship of TSI scores to performance in college-level classes was the focus; thus, only those who tested into college-level courses were considered. Because ENGL 1301 is a gateway course to other college-level studies, passing grades in ENGL 1301 were the appropriate measure of student success, hence the use of a passing grade of A, B, C, or D in ENGL 1301 (Brown & Eklund, 2014). Those who tested into Developmental Education (DE) as a result of their TSI scores did not, by definition, test into college-level courses, so their data were excluded from the study.
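The cohort filtering described above can be sketched as follows. This is a minimal illustration, not the authors' code: the column names and the toy records are assumptions; the rule applied (drop DE placements, count A-D in ENGL 1301 as passing) comes from the study design.

```python
# Sketch of the cohort filter: drop students placed into DE, and flag
# grades of A, B, C, or D in ENGL 1301 as passing. Column names are
# hypothetical; records are simulated for illustration.
import pandas as pd

records = pd.DataFrame({
    "placement": ["ENGL 1301", "DE", "ENGL 1301", "ENGL 1301"],
    "final_grade": ["A", None, "F", "C"],
})

# Keep only students who tested into the college-level course.
cohort = records[records["placement"] == "ENGL 1301"].copy()

# A, B, C, or D counts as a passing grade (Brown & Eklund, 2014).
cohort["passed"] = cohort["final_grade"].isin(["A", "B", "C", "D"])
print(cohort["passed"].tolist())  # → [True, False, True]
```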
A one-way ANOVA was run comparing passing grades in ENGL 1301 between applicants tested during the last three semesters of the non-standardized TSI and applicants tested during the first three semesters of the standardized TSI. A two-way ANOVA was run to test age and gender for significant differences in ENGL 1301 grades, and for interaction between age and gender.

Validity and Reliability
The dependent variable in this study was the pass rate of students in ENGL 1301. Grades of A, B, C, or D, widely accepted in HE as a measure of student success (Brown & Eklund, 2014), were used. Because the focus of the study was the effect of the non-standardized TSI versus the standardized TSI on student success, those instruments served as the independent variables. Age and gender also served as independent variables; these demographics are measured similarly at most community colleges, which allowed for consistent categorization appropriate to this study.

Analysis of Data
A one-way ANOVA revealed no statistically significant difference in final ENGL 1301 grades between students based on which version of the TSI they took; between-groups analysis of the two placement instruments yielded F (1, 4015) = 3.82, p = .051, partial eta squared = .001. Standardized testing therefore does not appear to affect student success.
Having found no statistically significant difference from the one-way ANOVA, a two-way ANOVA was then used to test age individually, gender individually, and the interaction of age and gender. Age was statistically significant (p = .000), with the 26-and-over group receiving higher final grades in ENGL 1301. Gender was also statistically significant (p = .001); females (M = 2.68) scored higher than males (M = 2.50). The interaction between the effects of age and gender on ENGL 1301 final grade was not statistically significant, F (5, 4011) = .14, p = .873, partial eta squared = .000. Within these parameters, this study found non-standardized and standardized placement testing to have no effect on student success.
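The reported effect sizes can be checked against the F statistics and degrees of freedom, since partial eta squared is F × df_effect / (F × df_effect + df_error). A minimal sketch (the helper function name is ours):

```python
# Recover partial eta squared from a reported F statistic and its
# degrees of freedom: eta_p^2 = (F * df_eff) / (F * df_eff + df_err).
def partial_eta_squared(F, df_effect, df_error):
    return (F * df_effect) / (F * df_effect + df_error)

# TSI version, one-way ANOVA: F(1, 4015) = 3.82
print(round(partial_eta_squared(3.82, 1, 4015), 3))  # → 0.001

# Age x gender interaction: F(5, 4011) = .14
print(round(partial_eta_squared(0.14, 5, 4011), 3))  # → 0.0
```

Both values agree with the partial eta squared figures reported above, confirming that the observed effects, significant or not, are very small.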

Results and Conclusions
This study examined placement testing practices in community colleges in Texas. Specifically, differences in student success were explored between students who took the non-standardized TSI and those who took the standardized TSI. Texas is not the only state that requires use of a single standardized placement test with uniform cutoff scores to determine placement into community colleges; this creates an opportunity for other states to perform similar studies and to compare results. In addition, as K-12 increases efforts to transition students into HE, the need for equitable, effective, organized assessment of their ability to succeed academically continues to be addressed by instruments that are faster and less complicated to score than non-standardized methods.
Findings within this study indicated that differences in student success, as measured by final grades in ENGL 1301, between the non-standardized TSI cohort and the standardized TSI cohort were not statistically significant. The interaction between age and gender was likewise not statistically significant. However, the results indicated statistically significant differences in final ENGL 1301 grades based on gender, and on age group, with the 26-and-over group scoring higher than the 19-25 and 18-and-under age groups. These results shed light on the need for further consideration of existing placement policies and practices within community colleges. They also point toward the need for in-depth studies by state oversight entities to examine the statewide implementation of uniform cutoff scores and placement waiver decisions. A thorough study of the different types of assessment measures should also be conducted to examine their actual impact on degree and certificate completion. Likewise, revisiting the overall effectiveness of standardized instruments and regular assessments could identify necessary areas of improvement and may invite greater access to higher education.