Research Methods & Data Analysis Essentials
Evaluation Types Explained
Needs Assessment
Identifies what people need versus what they currently have.
Example: Survey students to see if mental health services are needed on campus.
Process Evaluation
Looks at how the program is being run.
Example: Are tutoring sessions happening on time and following the plan?
Outcomes and Impact Evaluation
Measures whether the program worked (did it make a difference?).
Example: Compare student stress levels before and after a mindfulness program.
Evaluability Assessment
Checks if a program is ready to be evaluated.
Example: Review a new program to make sure it has clear goals and trackable data.
Validity vs. Reliability in Research
Validity
Are you measuring what you’re supposed to?
Face Validity: Looks right on the surface.
Content Validity: Covers all relevant areas of the concept.
Criterion Validity: Matches a real-world outcome.
Construct Validity: Truly measures the underlying idea or trait.
Reliability
Are the results consistent?
Test-Retest: Same results over time.
Split-Half: Two halves of a test give similar results.
Inter-Rater: Different people give similar scores.
Can something be reliable but not valid? Yes. Example: A broken thermometer gives the same wrong temperature every time (reliable, but not valid).
How can a professor check if their exam is reliable? Use a split-half test or compare results over time.
Effective Survey Question Writing
Common Problems in Survey Questions
Double-Barreled: Asks two things in one.
Bad: “Do you like school and your teacher?”
Fix: Ask two separate questions.
Double Negatives: Confusing phrasing.
Bad: “Do you not disagree…?”
Fix: Keep it simple and direct.
Biased Wording: Leads the person to answer a certain way.
Bad: “How great was your experience?”
Fix: “How would you rate your experience?”
Fence-Sitting & Floating: Fence-sitters pick the neutral middle option to avoid taking a side; floaters give an answer even when they have no real opinion, often because the question is confusing or offers no "don't know" option.
Why Clarity is Important in Surveys
Confusing questions lead to bad data.
Clear questions ensure accurate answers.
Understanding Survey Methods
Interview Surveys
Pros: Deep information.
Cons: Time-consuming and expensive.
Mailed Surveys
Pros: Cheap and wide reach.
Cons: Slow and often low response rate.
Phone Surveys
Pros: Fast.
Cons: People might not answer or hang up.
Self-Administered Surveys (Online/Paper)
Pros: Good for privacy and sensitive topics.
Cons: Must be very clear — no one is there to help explain.
When to Avoid Certain Survey Types
Avoid phone or mail surveys if you’re asking follow-up or complicated questions.
Qualitative vs. Quantitative Research
Qualitative Research
Focuses on experiences, stories, and open-ended responses.
Example: Interview students about how they feel about finals week.
Quantitative Research
Focuses on numbers, statistics, and testing hypotheses.
Example: Survey students on hours studied versus GPA.
What is Saturation in Research?
When new interviews or data collection stop yielding new information — you’ve heard it all.
Basic Statistics for Data Analysis
Using Correlation
Use a correlation when you’re checking the relationship between two variables.
Report: r (correlation coefficient) + p-value.
Example: Study time and GPA.
Using a T-Test
Use a t-test when you’re comparing the average of two groups.
Example: Compare test scores of students who used flashcards versus those who didn’t.
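As a minimal sketch, the pooled-variance two-sample t statistic can be computed by hand from its formula. The scores below are invented; a library routine (e.g., SciPy's `ttest_ind`) would also return the p-value:

```python
import math
import statistics

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (assumes roughly equal variances)."""
    na, nb = len(a), len(b)
    # Pool each group's sample variance, weighted by its degrees of freedom
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

flashcards = [85, 88, 90, 86, 91]      # hypothetical test scores
no_flashcards = [78, 82, 80, 79, 83]
t = two_sample_t(flashcards, no_flashcards)
print(round(t, 2))
```

The larger the absolute t, the bigger the mean difference relative to the spread within the groups.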
Using a Chi-Square Test
Use a chi-square test when you’re comparing two categorical variables.
Example: Gender and voting preference.
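The chi-square statistic compares the observed counts against the counts you would expect if the two variables were independent. A minimal sketch with a hypothetical 2x2 table:

```python
def chi_square_2x2(table):
    """Chi-square statistic for a 2x2 contingency table (no continuity correction)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count if the row and column variables were independent
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = gender, columns = preferred candidate
table = [[30, 20],
         [20, 30]]
print(chi_square_2x2(table))  # prints 4.0
```

A larger statistic means the observed counts stray further from independence; the p-value then comes from the chi-square distribution.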
Using Frequencies
Use frequencies when summarizing how many people answered a certain way.
Example: 60% said “yes,” 40% said “no.”
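Frequencies are just counts and percentages. A sketch using Python's `Counter`, with made-up yes/no responses matching the example above:

```python
from collections import Counter

# Hypothetical survey answers
responses = ["yes", "no", "yes", "yes", "no",
             "yes", "no", "yes", "yes", "no"]

counts = Counter(responses)
n = len(responses)
for answer, count in counts.most_common():
    print(f"{answer}: {count} ({100 * count / n:.0f}%)")
```

This prints "yes: 6 (60%)" and "no: 4 (40%)".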
Understanding Statistical Significance
If the p-value is less than 0.05, the result is considered statistically significant (unlikely to be due to chance alone).
If the p-value is greater than 0.05, there is not enough evidence of a real effect; the result could plausibly be due to chance.
Important: Something can be statistically significant but not actually have much practical importance in real life.
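One common way to judge practical importance is an effect size such as Cohen's d, which expresses the mean difference in standard-deviation units, independent of the p-value (rough conventions: 0.2 small, 0.5 medium, 0.8 large). A sketch with invented data:

```python
import math
import statistics

def cohens_d(a, b):
    """Cohen's d: difference between group means in pooled standard-deviation units."""
    sp2 = ((len(a) - 1) * statistics.variance(a)
           + (len(b) - 1) * statistics.variance(b)) / (len(a) + len(b) - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2)

# Hypothetical scores: a one-point mean difference between groups
group_a = [70, 72, 68, 71, 69]
group_b = [69, 71, 67, 70, 68]
d = cohens_d(group_a, group_b)
print(round(d, 2))
```

With a very large sample, even a tiny d can produce a significant p-value, which is exactly why effect size is worth reporting alongside significance.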
How to Interpret a T-Test
Step-by-Step Interpretation:
Check the p-value:
Less than 0.05 = Significant (real difference).
Greater than 0.05 = Not Significant (might be random).
Look at the group means:
The group with the higher mean scored higher on whatever was measured; whether higher means "better" depends on the measure (a higher test score is better, a higher stress score is worse).
Check the t-value:
A higher absolute t-value indicates a stronger difference, but the p-value is what tells you if it’s statistically significant.
T-Test Interpretation Example:
Given: t = 2.75, p = 0.01; Group A Mean = 85, Group B Mean = 90
Interpretation:
The p-value (0.01) is less than 0.05, indicating a statistically significant difference.
Group B (Mean = 90) performed better than Group A (Mean = 85).
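The interpretation steps above can be wrapped in a small helper. `interpret_t_test` is a hypothetical function name; the logic simply mirrors the checklist (check the p-value first, then compare the group means):

```python
def interpret_t_test(t, p, mean_a, mean_b, alpha=0.05):
    """Plain-language summary of a two-group t-test result (illustrative helper)."""
    if p < alpha:
        verdict = "statistically significant"
    else:
        verdict = "not statistically significant"
    higher = "Group B" if mean_b > mean_a else "Group A"
    return f"p = {p}: {verdict}; {higher} had the higher mean."

# The worked example from above: t = 2.75, p = 0.01, means 85 and 90
summary = interpret_t_test(2.75, 0.01, 85, 90)
print(summary)
```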
Data Analysis Scenarios
Scenario 1: Comparing Two Means
Question: Compare two means?
Answer: Use a t-test.
Scenario 2: Comparing Two Categories
Question: Compare two categorical variables (like yes/no or male/female)?
Answer: Use a chi-square test.
Scenario 3: Interpreting a T-Test Result
Question: Interpret this: t = 3.45, p = 0.05; Group A Mean = 85.5, Group B Mean = 89.2
Interpretation:
The p-value sits exactly at the 0.05 threshold; by the strict "less than 0.05" rule this is not significant, though results this close are often described as borderline.
Group B (Mean = 89.2) performed slightly better than Group A (Mean = 85.5).
Primary vs. Secondary Data
Primary Data
Data you collect yourself for a specific research purpose.
Example: You conduct your own survey or interviews.
Secondary Data
Data already collected by someone else for a different purpose.
Example: Census data, research databases, public reports.
Why Use Secondary Data?
It saves time.
It’s often cheaper.
You can access large or national data sets that would be impossible to collect yourself.