
Heart Rate Analysis: Example of t-test Using MS Excel Analysis ToolPak

This article discusses a heart rate t-test analysis using the MS Excel Analysis ToolPak add-in. It is based on real data obtained from a personally applied aerobics training program.

Did you know that powerful statistical software resides in the common spreadsheet program you use every day or most of the time? If you have Microsoft Excel installed on your computer, chances are you have not activated a very useful add-in: the Data Analysis ToolPak.

See how MS Excel’s data analysis function was used in analyzing real data on the effect of aerobics on the author’s heart rate.

Statistical Analysis Function of MS Excel

Many students, and even teachers or professors, are not aware that they have powerful statistical software at their disposal in their everyday interaction with Microsoft Excel. To make use of this nifty tool that the not-so-discerning fail to discover, you will need to install it as an add-in to your existing MS Excel installation. Make sure your original MS Office DVD is in your DVD drive before you do the next steps.

You can activate the Data Analysis ToolPak by following the procedure below (this could vary between versions of MS Excel; this one’s for MS Office 2007):

1. Open MS Excel,
2. Click the Office Button (that round button at the uppermost left of the spreadsheet),
3. Click Excel Options at the bottom right of the menu,
4. Click Add-Ins in the left pane,
5. Choose Excel Add-ins in the Manage field at the bottom, then hit Go, and
6. Check the Analysis ToolPak box, then click OK.

Using the Data Analysis ToolPak to Analyze Heart Rate Data

The aim of this statistical analysis is to test whether there is really a significant difference between my heart rate eight months ago and last week. In my earlier post titled How to Slow Down Your Heart Rate Through Aerobics, I mentioned that my heart rate has been getting slower over time because of aerobics training. But I used the graphical method of plotting a trend line; I did not test whether there is a significant difference between my heart rate when I started measuring it and the last six weeks' data.

Now, I would like to answer the question: "Is there a significant difference between my heart rate eight months ago and the last six weeks' record?"

Student's t-test will be used to compare 18 readings taken eight months ago with 18 readings from the last six weeks. I measured my heart rate upon waking up (which ensures I am rested) on each of my three-times-a-week aerobics sessions.

Why 18? According to Dr. Cooper, the training effect accorded by aerobics could be achieved within six weeks, so I thought my heart rate within six weeks should not change significantly. So that’s six weeks times three equals 18 readings.

Eight months would be a sufficient time to effect a change in my heart rate since I started aerobic running eight months ago. And the trend line in the graph I previously presented shows that my heart rate slows down through time.

These are the assumptions of this t-test analysis and the reason for choosing the sample size.

The Importance of an F-test

Before applying the t-test, the first test you should do to avoid a spurious or false conclusion is to test whether the two groups of data have a different variance. Does one group of data vary more than the other? If they do, then you should not use the t-test. Nonparametric methods such as Mann-Whitney U test should be used instead.

How do you check whether this is the case, that is, whether one group of data varies more than the other? The common test to use is an F-test. If no significant difference in variance is detected, then you can go ahead with the t-test.

Here’s an output of the F-test using the Analysis ToolPak of MS Excel:

Notice that the p-value for the test is 0.36 [from P(F<=f) one-tail]. This means that the variance of one group is not significantly different from that of the other.

How do you know that the difference in variance between the two groups of data is not significant? Just look at the p-value of the data analysis output: if it is greater than 0.05, the difference in variance is not significant and the t-test can be used; if it is 0.05 or below, the variances differ significantly and a nonparametric test should be used instead.
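If you prefer to check this outside Excel, here is a minimal sketch of the same variance-ratio F-test in Python using scipy (assumed installed). The readings are hypothetical stand-ins, not the author's actual data:

```python
import numpy as np
from scipy import stats

# Hypothetical heart-rate readings (beats per minute), 18 per group
eight_months_ago = np.array([52, 55, 54, 53, 56, 52, 54, 55, 53,
                             54, 52, 56, 53, 55, 54, 53, 52, 55])
last_six_weeks   = np.array([50, 49, 51, 50, 52, 49, 50, 51, 50,
                             49, 51, 50, 52, 49, 50, 51, 50, 49])

# F statistic: ratio of the sample variances (larger over smaller)
f = np.var(eight_months_ago, ddof=1) / np.var(last_six_weeks, ddof=1)
df1 = len(eight_months_ago) - 1
df2 = len(last_six_weeks) - 1

# One-tail p-value from the F distribution, as in Excel's P(F<=f) one-tail
p_one_tail = 1 - stats.f.cdf(f, df1, df2)

print(f"F = {f:.2f}, one-tail p = {p_one_tail:.3f}")
```

A p-value above 0.05 here would justify proceeding with the equal-variance t-test.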

This result signals me to go on with the t-test analysis. Notice that the mean heart rate during the last six weeks (i.e., 50.28) is lower than that obtained eight months ago (i.e., 53.78). Is this difference really significant?

Result of the t-test

I ran a consistent 30 points per week last August and September 2013, but I have accumulated at least 50 points a week for the last six weeks. This means I have almost doubled my running capacity, so I should have a significantly lower heart rate than before. In fact, I felt that I could run more than my usual 4 miles, and I did run more than 6 miles once a week for the last six weeks.

Below is the output of the t-test analysis using the Analysis ToolPak of MS Excel:

The data shows that there is a significant difference between my heart rate eight months ago and the last six weeks. Why? Because the p-value is lower than 0.05 [i.e., P(T<=t) two-tail = 0.0073]. There is only a remote possibility that there is no difference between my heart rate 8 months ago and the last six weeks.

I ignored the other p-value because it is one-tailed; I only tested whether there is a significant difference or not. But because the one-tailed p-value is also significant, I can confidently say that I have obtained sufficient evidence that aerobics training has slowed down my heart rate, from 54 to 50. Four beats in eight months? That's amazing. I wonder what is the lowest heart rate I could achieve with constant training.
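For readers without Excel, the equivalent two-sample t-test can be sketched in Python with scipy (assumed installed); the readings below are hypothetical stand-ins for the author's data:

```python
import numpy as np
from scipy import stats

# Hypothetical heart-rate readings (beats per minute), 18 per group
eight_months_ago = np.array([52, 55, 54, 53, 56, 52, 54, 55, 53,
                             54, 52, 56, 53, 55, 54, 53, 52, 55])
last_six_weeks   = np.array([50, 49, 51, 50, 52, 49, 50, 51, 50,
                             49, 51, 50, 52, 49, 50, 51, 50, 49])

# Two-sample t-test assuming equal variances, the same test as Excel's
# "t-Test: Two-Sample Assuming Equal Variances"
t_stat, p_two_tail = stats.ttest_ind(eight_months_ago, last_six_weeks,
                                     equal_var=True)

print(f"t = {t_stat:.2f}, two-tail p = {p_two_tail:.4f}")
# A two-tail p below 0.05 indicates a significant difference in means
```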

This analysis holds only for my case, as I used my own set of data; but it is possible that the same results could be obtained for a greater number of people.

© 2014 April 28 P. A. Regoniel


Must-Visit Sites for Statistics

What is statistics? The term can loosely be defined as mathematics that involves collecting, interpreting, analyzing, and presenting large amounts of numerical data. Simply put, it is the processing of a great deal of information in a way that presents a complete picture of a particular subject. For instance, the United States Bureau of Economic Analysis, or BEA, is responsible for using statistical information to determine, among other things, the country's gross domestic product. Recent stats show GDP increasing by about 2.5%.

While that may be interesting for some, the average citizen may shy away from number crunching. Still, there are some great resources on the Web that can demonstrate just how important, useful and fun statistics can be.

Population Information

Whenever you consider moving to a new location, there are some statistics to be aware of. City-Data.com is a very popular website that can shed light on statistical information related to the population, average income, rent, etc. of various areas. If you're looking to take population stats global, check out GeoHive.com. There you learn that the United States is ranked as the third most populated country on Earth, with less than a third of the population of India, which is now ranked second, while China is the most populated country on the planet.

If you're looking to check these numbers in real-time, you should visit worldometers.info. Track global births, deaths, book titles published, money spent on video games (over $115,000,000 so far today!) and so much more.

The Census Bureau is also an ideal and reliable source for information about population.

On Crime and Safety

Few statistics are as compelling to the public as those describing the crime rate and safety of different locations. While the previously mentioned City-Data has a section of its site devoted to this, a statistics site more specific to crime is the popular CrimeReports.com. This site reveals data on different crimes in a particular location over a period of time. CrimeMapping.com relies more on reports from local police departments, making its data less detailed. However, the information that is available is very illuminating.

If you’re concerned about safety as it relates to flying, there are stats at Skybrary.aero that might interest you. As for determining which states are safest according to the number of occupation-related injuries and fatalities, the United States Bureau of Labor Statistics provides these numbers for each state.

Sports Related

These statistics could be considered less serious to anyone who isn’t a sports fan. However, when you really want to know how your favorite player or team is doing, they can be very relevant. Baseball statistics are among the most well-known sports statistics, and the official mlb.mlb.com site features sortable baseball statistics. For more general sports stats, OptaSports.com is ideal.

Whenever election time rolls around, statistics representing the chances of those running for office are everywhere. Over the past couple of presidential election cycles, Fivethirtyeight.com has come to the foreground due to the brilliant use of statistics to predict election results with startling accuracy.

Fun Statistic Sites to Visit

If you’re not looking for any particular statistical information, but would love some fun statistic sites to check out, be sure to visit Gapminder.org, StatisticBrain.com or MathIsFun.com. There are even statistics games available for kids at onlinemathlearning.com.

No matter what the topic is, it is likely to have statistics of some sort related to it. Whether you are looking for fun, a school project, or creating an info graphic or report for work, these sites can help you find the right stats. Number crunching can seem daunting, but statistics often provides a convenient way to learn more about our world and each other.


Can You Measure Love?

Can statistics hope to measure love as a concept? Here is an attempt to measure an abstract concept that appears to elude empirical analysis.

While teaching statistics and dealing with the different types of measurement scales, discussions on what can and cannot be measured always crop up. One of the classic examples of non-measurable variables, mentioned even in references, is an abstract concept such as love.

But will love really not lend itself to measurement? Just for discussion purposes, I present the different degrees of love and offer a conceptual framework to represent this contention.

Conceptual Framework for the Different Levels of Love

If we look closely at how love is described in our readings, experiences, and people's perspectives, there actually are different levels of love. It can be measured in a way, though not as exactly as on a ratio or interval scale; an ordinal scale suits it better.

Just like Maslow’s hierarchy of needs, love can be a very basic need to ensure human survival such as physical love, transcending to emotional and moving up the ladder to the highest form of spiritual love described in literature and wisdom of the ages. I represent the love concept as a measurable variable in the figure below.

Kinds and Degree of Human Love

Most people get the opportunity to experience some or all of the possibilities of love represented in the figure above. Here is a description of the concepts included in the conceptual framework.

1. Puppy love

This is the kind of love used to describe the surge of emotions felt by a young person having a "crush" on someone of the opposite sex. It usually happens during childhood or the adolescent years and is characterized by a fleeting affection. Because the feeling is short-term, it is termed simple infatuation.

2. Eros

Still taking off from Maslow's hierarchy of needs, eros lies on the physical plane. According to psychoanalytic theory, its main purposes are self-preservation, pleasure, and procreation.

3. Instinctual Love or Love by Instinct

Instinctual love refers to motherly or protective care for the young. Although animals show this instinct, the tender loving care of mothers for their children is classified as a kind of love that involves not only defense but also the affectionate nurture of children. Just like eros, this is still a part of ensuring species survival.

4. Platonic

According to dictionary.com, platonic love is an intimate companionship or relationship, especially between two persons of the opposite sex, characterized by the absence of sexual involvement. It transcends the boundaries of the physical plane; still emotional, but approaching the spiritual plane.

5. Philia

Love for fellow men as brothers or sisters lies more in the spiritual, rather than emotional realm. Feeling compassion for other people who are in need means giving up one’s own physical comfort and reaching out to give a helping hand.

6. Agape

No other love is greater than what Christ demonstrated. The term is used to describe the love of Christ for humankind. As John 15:13 puts it, "Greater love has no one than this, that one lay down his life for his friends."

The kinds of love described here are not mutually exclusive; meaning, whatever level of love a person is in, it could progress or shift. This just means that love has no boundaries and that no words can really, truly explain it. But of course, no one could hope to match the greatest love of all.

© 2013 October 25 P. A. Regoniel


What are the Psychometric Properties of a Research Instrument?

Here is a differentiation of reliability and validity as applied to the preparation of research instruments.

One of the most difficult parts of research writing is when the instrument's psychometric properties are scrutinized or questioned by your panel of examiners. Psychometric properties may sound new to you, but they are not actually new.

In simple words, psychometric properties refer to the reliability and validity of the instrument. So, what is the difference between the two?

Reliability refers to the consistency of the test results, while validity refers to their accuracy. An instrument should accurately and dependably measure what it is supposed to measure. Its reliability can help you make a valid assessment; its validity can make you confident in making a prediction.

Instrument’s Reliability

How can you say that your instrument is reliable? Although there are many types of reliability tests, what is more usually looked at is the internal consistency of the test. When presenting the results of your research, your panel of examiners might look for the results of the Cronbach’s alpha or the Kuder-Richardson Formula 20 computations. If you cannot do the analysis by yourself, you may ask a statistician to help you process and analyze data using a reliable statistical software application.

But if your intention is to determine the inter-correlations of the items in the instrument and whether these items measure the same construct, Cronbach's alpha is suggested. According to David Kingsbury, a construct is the behavior or outcome a researcher seeks to measure in the study. This is often revealed by the independent variable.

When the inter-correlations of the items increase, the Cronbach’s alpha generally increases as well. The table below shows the range of values of Cronbach’s alpha and the corresponding descriptions on internal consistency.

(Note: The descriptions are not officially cited and are taken only from Wikipedia, but you may confer with your statistician and your panel of examiners. If the value of alpha is less than 0.5, the items are considered poor and must be omitted.)
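For those computing reliability without a dedicated package, Cronbach's alpha is straightforward to script. Below is a sketch in Python with numpy; the response matrix is hypothetical Likert data, not from any real instrument:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items
scores = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5],
          [3, 4, 3, 3]]

alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # about 0.92, i.e., high internal consistency
```

A commonly cited rule of thumb grades an alpha of 0.9 and above as excellent, roughly 0.7 to 0.9 as acceptable to good, and below 0.5 as unacceptable, but confer with your statistician on the cutoffs appropriate to your study.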

Instrument’s Validity

There are many types of validity measures. One of the most commonly used is the construct validity. Thus, the construct or the independent variable must be accurately defined.

To illustrate, if the independent variable is the school principals’ leadership style, the sub-scales of that construct are the types of leadership style such as authoritative, delegative and participative.

Construct validity determines whether the items used in the instrument have good validity measures using factor analysis, and whether each sub-scale has good inter-item correlation using bivariate correlation. The items are considered good if the p-value is less than 0.05.
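As a rough illustration of inter-item correlation, the sketch below computes pairwise bivariate (Pearson) correlations with their p-values in Python using scipy; the three items and their responses are hypothetical:

```python
from scipy import stats

# Hypothetical responses of 6 subjects to three items of one sub-scale
item1 = [4, 3, 5, 2, 4, 3]
item2 = [5, 3, 5, 2, 4, 4]
item3 = [4, 4, 5, 3, 4, 3]

# Pairwise bivariate (Pearson) correlations with p-values
pairs = [("1-2", item1, item2), ("1-3", item1, item3), ("2-3", item2, item3)]
for name, a, b in pairs:
    r, p = stats.pearsonr(a, b)
    print(f"items {name}: r = {r:.2f}, p = {p:.3f}")
```

With only six hypothetical subjects the p-values carry little weight; a real analysis would use the full sample.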

References:

1. Kingsbury, D. (2012). How to validate a research instrument. Retrieved October 16, 2013, from http://www.ehow.com/how_2277596_validate-research-instrument.html

2. Grindstaff, T. (n.d.). The reliability & validity of psychological tests. Retrieved October 16, 2013, from http://www.ehow.com/facts_7282618_reliability-validity-psychological-tests.html

3. Renata, R. (2013). The real difference between reliability and validity. http://www.ehow.com/info_8481668_real-difference-between-reliability-validity.html

4. Cronbach’s alpha. Retrieved October 17, 2013, from http://en.wikipedia.org/wiki/Cronbach%27s_alpha

© 2013 October 17 M. G. Alvior


Example of a Research Question and Its Corresponding Statistical Analysis

How should a research question be written in such a way that the corresponding statistical analysis is figured out? Here is an illustrative example.

One of the difficulties encountered by my graduate students in statistics is how to frame questions in such a way that they will lend themselves to appropriate statistical analysis. The students are particularly confused about how to write questions for a test of difference or correlation. This article deals with the former.

How should the research questions be written and what are the corresponding statistical tools to use? This question is a challenge to someone just trying to understand how statistics work; with practice and persistent study, it becomes an easy task.

There are proper ways to do this, but you need a good grasp of the statistical tools available, at least the basic ones, to match them to the research questions or vice-versa. To demonstrate the concept, let's look at the common ones, that is, those involving a difference between two groups.

Example Research Question to Test for Significant Difference

Let's take an example related to education as the focus of the research question. Say, a teacher wants to know if there is a difference between the academic performance of pupils who have had early exposure to Mathematics and pupils without such exposure. Academic performance is still a broad measure, so let's make it more specific: we'll take the summative test score in Mathematics as the variable in focus. Early exposure to Mathematics means the child played Mathematics-oriented games in the pre-school years.

To test for a difference in performance (after random selection of students with about equal aptitudes, the same grade level, and the same Math teacher, among others), the research question that will lend itself to analysis can be written thus:

1. Is there a significant difference between the Mathematics test score of pupils who have had early Mathematics exposure and those pupils without?

Notice that the question specifies a comparison of two groups of pupils: 1) those who have had early Mathematics exposure, and, 2) those without. The Mathematics summative test score is the variable to compare.

Statistical Tests for Difference

What then should be the appropriate statistical test in the case described above? Two things must be considered: 1) sampling procedure, and 2) number of samples.

If the researcher is confident that he has sampled randomly and that the sample approaches a normal distribution, then a t-test is appropriate to test for difference. If the researcher is not confident that the sampling is random, or there are only a few samples available for analysis and the population most likely follows a non-normal distribution, the Mann-Whitney U test is the appropriate test for difference. The first test is a parametric test while the latter is a non-parametric test. The nonparametric test is distribution-free, meaning it does not matter whether your population exhibits a normal distribution or not. Nonparametric tests are best used in exploratory studies.

A distribution approaches normality when many samples are used in the analysis. Many statisticians place this at around 200 cases, but it ultimately depends on the variability of the measure: the greater the variability, the greater the number of cases required to approximate a normal distribution.

A quick inspection of the distribution is made using a graph of the measurements, i.e., the Mathematics test score of pupils who have had early Mathematics exposure and those without. If the scores are well-distributed with most of the measures at the center tapering at both ends in a symmetrical manner, then it approximates a normal distribution (Figure 1).

If the distribution is non-normal, you will notice that the graph is skewed (leans either to the left or to the right), and you will have to use a non-parametric test. A skewed distribution means that most students have low scores or most of them have high scores, which suggests that selection favored a certain group of pupils and each pupil did not have an equal chance of being selected. This violates the normality requirement of parametric tests such as the t-test, although the t-test is robust enough to accommodate skewness to a certain degree. A normality test such as the Shapiro-Wilk test may be used to check whether a distribution is normal.
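The decision rule above can be sketched in code. The following Python example (scipy assumed installed) generates hypothetical test scores for the two groups of pupils, checks normality with the Shapiro-Wilk test, and then picks the t-test or the Mann-Whitney U test accordingly:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical summative Mathematics scores (out of 50) for two groups
early_exposure = rng.normal(38, 5, 30).round()
no_exposure    = rng.normal(33, 5, 30).round()

# Shapiro-Wilk checks normality; p > 0.05 means no evidence of non-normality
normal = all(stats.shapiro(g)[1] > 0.05
             for g in (early_exposure, no_exposure))

if normal:
    stat, p = stats.ttest_ind(early_exposure, no_exposure)     # parametric
else:
    stat, p = stats.mannwhitneyu(early_exposure, no_exposure)  # non-parametric

print(f"{'t-test' if normal else 'Mann-Whitney U'}: p = {p:.4f}")
```

The group means and the random seed here are arbitrary choices for illustration only.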

Writing the Conclusion Based on the Statistical Analysis

Now, how do you write the results of the analysis? If the statistical analysis above found that there is a significant difference between pupils who have had Mathematics exposure early in life and those who did not, the statement of the findings should be written this way:

The data present sufficient evidence that there is a significant difference in the Mathematics test scores of pupils who have had early Mathematics exposure compared to those without.

It can be written in another way, thus:

There is reason to believe that the Mathematics test score of pupils who have had early Mathematics exposure is different from those without.

Do not say "it was proven that…" Nobody is 100% sure that this conclusion will always be correct. There will always be errors involved. Science is not foolproof; there are always other possibilities.

© 2013 October 12 P. A. Regoniel


An Introduction to Multiple Regression

What is multiple regression? When is it used? How is it computed? This article expounds on these questions.

Multiple regression is a commonly used statistical tool that has a range of applications. It is most useful in making predictions of the behavior of a dependent variable using a set of related factors or independent variables. It is one of the many multivariate (many variables) statistical tools applied in a variety of fields.

Origin of Multiple Regression

Multiple regression originated from the work of Sir Francis Galton, an Englishman who pioneered eugenics, a philosophy that advocates reproduction of desirable traits. In his study of sweet peas, an experimental plant popular among scientists like Gregor Mendel because it is easy to cultivate and has a short life span, Galton proposed that a characteristic (or variable) may be influenced not by a single important cause but by a multitude of causes of greater and lesser importance. His work was further developed by the English mathematician Karl Pearson, who employed a rigorous mathematical treatment of Galton's findings.

When do you use multiple regression?

Multiple regression is appropriately used when a single dependent variable (denoted by Y) is correlated with two or more independent variables (denoted by X1, X2, …, Xn). It is used to assess causal linkages and predict outcomes.

For example, a student’s grade in college as the dependent variable of a study can be predicted by the following variables: high school grade, college entrance examination score, study time, sports involvement, number of absences, hours of sleep, time spent viewing the television, among others. The computation of the multiple regression equation will show which of the independent variables have more influence than the others.

How is a multiple regression equation computed?

The data used in calculating the multiple regression formula take the form of ratio and interval variables (see four statistical measures of measurement for a detailed description of variables). When data come in the form of categories, dummy variables are used instead, because the computation requires numeric input. Dummy variables are numbers representing a categorical variable. For example, when gender is included in the multiple regression analysis, it is encoded as 1 to represent a male subject and 0 to represent a female, or vice-versa.

If several independent variables are involved in the investigation, manual computation will be tedious and time-consuming. For this reason, statistical software packages like SPSS, Statistica, Minitab, Systat, and even MS Excel are used to correlate a set of independent variables with the dependent variable. The data analyst just has to encode the data into columns, one category per column, with each sample occupying one row of the spreadsheet.

The formula used in multiple regression analysis is given below:

Y = a + b1*X1 + b2*X2 + … + bn*Xn

where a is the intercept, b1 to bn are the regression (beta) coefficients, and X1 to Xn are the independent variables.
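The coefficients a, b1, …, bn are estimated by least squares. Here is a minimal sketch in Python with numpy, fitting a two-predictor version of the equation on hypothetical data (high school grade and entrance exam score predicting college grade):

```python
import numpy as np

# Hypothetical data for 8 students
X1 = np.array([85, 90, 78, 92, 88, 75, 80, 95])   # high school grade
X2 = np.array([70, 82, 60, 88, 75, 55, 65, 90])   # entrance exam score
Y  = np.array([83, 89, 76, 93, 86, 72, 79, 94])   # college grade

# Design matrix with a column of 1s for the intercept a
X = np.column_stack([np.ones_like(X1), X1, X2])

# Least-squares estimates of [a, b1, b2]
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
a, b1, b2 = coef
print(f"Y = {a:.2f} + {b1:.3f}*X1 + {b2:.3f}*X2")
```

In practice, a statistics package also reports p-values and confidence intervals for each coefficient, which this bare least-squares fit does not.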

From the set of variables initially incorporated in the multiple regression equation, a set of significant predictors can be identified. This means that some of the independent variables will have to be eliminated in the multiple regression equation if they are found to exert minimal or insignificant correlation to the dependent variable. Thus, it is good practice to make an exhaustive review of literature first to avoid including variables which have consistently shown no correlation to the dependent variable being investigated.

How do you write the multiple regression hypothesis?

For the example given above, you can state the multiple regression hypothesis this way:

There is no significant relationship between a student’s grade in college and the following:

1. high school grade,
2. college entrance examination score,
3. study time,
4. sports involvement,
5. number of absences,
6. hours of sleep, and
7. time spent viewing the television.

All of these variables should be quantified to facilitate encoding and computation.

For more practical tips, an example of applied multiple regression is given here.

© 2013 September 9 P. A. Regoniel


Big Data Analytics and Executive Decision Making

What is big data analytics? How can the process support decision-making? How does it work? This article addresses these questions.

The Meaning of Big Data Analytics

Statistics is a powerful tool that large businesses use to further their agenda. The age of information presents opportunities to dabble with voluminous data generated from the internet or other electronic data capture systems to output information useful for decision-making. The process of analyzing these large volumes of data is referred to as big data analytics.

What Can be Gained from Big Data Analytics?

How will data gathered from the internet or electronic data capture systems be that useful to decision makers? Of what use are those data?

From a statistician's or data analyst's point of view, the great amounts of data available for analysis mean a lot of things. However, analysis becomes meaningful when guided by specific questions posed at the beginning. Data remain mere data unless their collection was designed to meet a stated goal or purpose.

However, when large amounts of data are collected using a wide range of variables or parameters, it is still possible to analyze those data to see relationships, trends, differences, among others. Large databases serve this purpose. They are ‘mined’ to produce information. Hence, the term ‘data mining’ arose from this practice.

In this discussion, emphasis is given on the information provided by data for effective executive decision-making.

Example of the Uses of Big Data Analytics

An executive of a large, multinational company may, for example, ask three questions:

1. What is the sales trend of the company’s products?
2. Do sales approach a predetermined target?
3. What is the company’s share of the total product sales in the market?

What kind of information does the executive need and why is he asking such questions? Executives expect aggregated information or a bird’s eye view of the situation.

A sales trend can easily be shown by preparing a simple line graph of product sales since the launch of the product. Just by simple inspection of the graph, an executive can easily see the ups and downs of product sales. If three products are presented at the same time, it is easy to spot which one performs better than the others. If the sales trend dipped somewhere, the executive may ask what caused the dip in sales.

Hence, action may be applied to correct the situation. A sudden surge in sales may be attributed to an effective information campaign.

How about that question on meeting a predetermined target? A simple comparison of unit sales using a bar graph showing targeted and actual accomplishments achieves this end.

The third question may be addressed with a pie chart showing the percentage of the company's product sales relative to those of the other companies. Thus, information on the company's competitiveness is produced.
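The market-share computation behind such a pie chart is simple aggregation. A small sketch in plain Python, with hypothetical unit sales:

```python
# Hypothetical unit sales by company in one market
sales = {"Our company": 420, "Competitor A": 310, "Competitor B": 270}

total = sum(sales.values())
# Each company's share as a percentage of total market sales
shares = {name: round(100 * units / total, 1) for name, units in sales.items()}

print(shares)  # {'Our company': 42.0, 'Competitor A': 31.0, 'Competitor B': 27.0}
```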

These graph outputs, if based on large amounts of data, are more reliable than those based on small random samples, because there is an inherent error associated with sampling: samples may not correctly reflect the population. Greater confidence in decision-making, therefore, is given to analysis backed by large volumes of data.

Data Sources for Big Data Analytics

How are large amounts of data amassed for analytics?

Whenever you subscribe, log in, join, or make use of any free internet service like a social network or an email service, you become part of the statistics. Simply opening your email and clicking products displayed on a web page provides information on your preferences. The data analyst can relate your preferences to the profile you gave when you subscribed to the service. But your preference is only one point in the correlation analysis; more data are required for analysis to take place. Hence, aggregating the behavior of all internet users provides better generalizations.

Conclusion

This discussion highlights the importance of big data analytics. When it becomes a part of an organization’s decision support system, better decision-making by executives is achieved.

Reference

TimeAtlas.com (August 23, 2011). Web server logs and internet privacy. Retrieved August 28, 2013, from http://www.timeatlas.com/web_sites/general/web_server_logs_and_internet_privacy#.Uh1Dbb8W3Zh

© 2013 August 28 P. A. Regoniel


Examples for Research Design Development

How do you come up with your research design? Here are two examples of blood pressure exploratory studies as leads toward research design development.

Blood Pressure Exploratory Study

I find the practical aspects of applying research enjoyable, and I have designed experiments to uncover relationships or to resolve my own problems.

Several years ago, I convinced my doctor to cut down my blood pressure maintenance drug. I simply presented him a graph I had prepared using a spreadsheet application and an Analysis of Variance (ANOVA) comparing my blood pressure readings at the full dosage of the drug, half of it, and a fourth of it. I also compared the groups two at a time using t-tests and got the same results. The graph and the statistical analysis showed that my blood pressure readings did not differ significantly as I gradually reduced the dosage of the prescribed drug.
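A one-way ANOVA like the one described can be run in a few lines of Python with scipy (assumed installed). The readings below are hypothetical, not the author's actual data:

```python
from scipy import stats

# Hypothetical systolic blood pressure readings (mmHg) at each dosage level
full_dose    = [128, 131, 127, 130, 129, 132]
half_dose    = [130, 129, 131, 128, 132, 130]
quarter_dose = [129, 132, 130, 131, 128, 130]

# One-way ANOVA across the three dosage groups
f_stat, p = stats.f_oneway(full_dose, half_dose, quarter_dose)

print(f"F = {f_stat:.2f}, p = {p:.3f}")
# A p above 0.05 would mean no significant difference across dosages
```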

The Diet Experiment

The primary purpose of this experiment was to see whether diet can produce the same results as a drug in lowering blood pressure. One of the active components of the drug in question was potassium, and I thought it would be better to take natural food to get the mineral. I computed the amount of potassium that corresponds to the drug dosage, ate the number of potatoes that would supply that amount, and gradually reduced the drug dosage while monitoring my blood pressure daily.

From my reading on the nutritional value of the potato, it would take about two to three medium-sized potatoes to get the same amount of potassium. So I ate at least three potatoes a day to match the amount of potassium required as I cut back on the drug dosage.
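The computation is simple division and rounding up. The figures below are assumed placeholders for illustration only, not nutritional or medical guidance.

```python
import math

# Hypothetical figures for illustration only
potassium_per_potato_mg = 600   # assumed potassium in one medium potato
target_potassium_mg = 1600      # assumed potassium equivalent of the dose

# Round up, since a fraction of a potato will not cover the target
potatoes_needed = math.ceil(target_potassium_mg / potassium_per_potato_mg)
print(potatoes_needed)  # 3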

When I got down to about 1/4 of the prescribed dose, I asked my doctor if I could forgo the drug, because I suspected it was one of the reasons I was feeling weak. I confirmed this by reading up on the side effects of the drug. The doctor was amazed because I had reduced the drug to an amount that, according to him, no longer provides substantial benefit in lowering blood pressure. My blood pressure had stabilized. He said I could forgo the drug and return to him if my blood pressure rises again.

Despite this apparent success in my desire to live without the drug in my system, I do not recommend this approach to anybody, because it might work differently for different people. I take certain precautions when I conduct studies on myself.

Blood Pressure and Exercise Experiment

Recently, I was at it again. This time, I just wanted to verify whether exercise really does provide the benefit of lowering blood pressure. My readings say so, and I wanted to find out personally what the numbers would show. I monitored my blood pressure before exercise, right after exercise, and 15 minutes or more after exercise, so that my blood pressure could stabilize at rest.

I started this just last week and already saw a trend from three readings. The results of the blood pressure monitoring are shown in the table below.

The results are interesting because, obviously, my blood pressure went down right after exercise and dropped further 15 minutes after. Upon waking up, my systolic reading, which indicates the maximum arterial pressure, is higher than the normal 120 mm Hg, but the diastolic reading is normal. After exercise, the systolic pressure dropped greatly, evident just by visual inspection, even while the heartbeat was high. After 15 minutes, the systolic and diastolic readings went down further while my heartbeat approached its normal value.

So are the results conclusive enough to say that exercise lowers blood pressure? There is no doubt that exercise lowers blood pressure[1], but I have not seen details on how much blood pressure is reduced by exercise. These data show me right away the benefits of exercise and serve as encouragement to engage in and maintain my exercise routine.

Today, when I went on my usual six-kilometer run, finishing in 41 minutes and 38 seconds (my fastest so far over that distance), my blood pressure after exercise approximated the previous values: 104/65 with a heartbeat of 95. Fifteen minutes later, my heartbeat stabilized at 64, with a corresponding blood pressure of 101/59.
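As a side note, the running pace implied by those figures is easy to verify with a little arithmetic:

```python
# Pace for a 6 km run completed in 41 minutes and 38 seconds
distance_km = 6
time_s = 41 * 60 + 38                  # total time in seconds
pace_s_per_km = time_s / distance_km   # seconds per kilometer

minutes, seconds = divmod(round(pace_s_per_km), 60)
print(f"Pace: {minutes}:{seconds:02d} min/km")
```

This works out to roughly 6 minutes and 56 seconds per kilometer.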

Data Collection Procedure for the Exercise Experiment

How did I come up with these values? What is the data collection procedure? I collected the data systematically, making the readings as consistent as I could. Roughly, the data collection procedure goes this way and can be replicated by anybody:

Record BP and heartbeat –> stretching exercise for five minutes –> slow walk of 8 minutes –> run proper –> cooling down with a slow walk for about 15 minutes –> five-minute stretching –> record BP and heartbeat –> rest for 15 minutes –> record BP and heartbeat
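Each session produces three sets of readings, which can be logged in a simple structure for later analysis. Here is a minimal Python sketch; the pre-exercise values are assumed placeholders, while the other two rows are the readings reported earlier.

```python
import csv
import io

# One session's readings: "before" values are assumed placeholders;
# "after" and "rest15" are the values reported in the text
session = [
    {"stage": "before", "systolic": 121, "diastolic": 80, "heartbeat": 72},  # assumed
    {"stage": "after",  "systolic": 104, "diastolic": 65, "heartbeat": 95},
    {"stage": "rest15", "systolic": 101, "diastolic": 59, "heartbeat": 64},
]

# Write the log as CSV so a spreadsheet can pick it up later
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["stage", "systolic", "diastolic", "heartbeat"])
writer.writeheader()
writer.writerows(session)
print(buffer.getvalue())
```

Appending one such block per day builds the series needed to see a trend across sessions.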

I used an Omron automatic wrist blood pressure monitor for the blood pressure readings. It reads a little higher than a standard sphygmomanometer, but it reads consistently, so it would be easy to calibrate for more standardized results.

From this exploratory study, which confirms the benefits of exercise in a quantitative way, a research design may be developed for more rigorous analysis. Notice, however, that sleep may also influence blood pressure readings, so I marked the third reading with an asterisk. The quality of my sleep before the first two readings was not that good, as I had six hours of sleep or less, while before the third reading I got quality sleep of seven hours or more. This apparently resulted in lower blood pressure readings upon waking up.

This means that if I pursue this experiment, I should make my measurements consistent, record the hours of sleep, and factor them into the analysis. I should also make sure that the monitoring time is the same throughout the duration of the study.

Now, the question is: “Have studies like this been conducted before?” I honestly do not know, as I am not a medical researcher. At best, my experience is only a case study, a description of my own case. But a review of the literature will tell me whether a similar experiment has been done by anyone on a greater number of people. Such experiments are guided by the different theories on the effects of exercise on health developed through time. In my case, I did it out of mere curiosity, to verify readings that are also backed by theory.

From these initial data, an experimental research design may be developed to ensure that the evidence obtained answers the questions initially posed for the study. Two questions were posed in these two examples: 1) Can a well-planned diet produce the same results as a drug in lowering blood pressure? and 2) Does exercise lower blood pressure?

From simple case studies like these, experiments may be designed to test if the findings are consistent for a greater number of people. This will also provide insights on which variables should be included for analysis.

Reference

1. Mayo Clinic (n.d.). Exercise: A drug-free approach to lowering high blood pressure. Retrieved August 27, 2013, from http://www.mayoclinic.com/health/high-blood-pressure/HI00024

© 2013 August 26 P. A. Regoniel

What is a Model?

In the research and statistics context, what does the term model mean? This article defines what a model is, poses guide questions on how to create one, and provides simple examples to clarify points arising from those questions.

One of the interesting things that I particularly like in statistics is the prospect of being able to predict an outcome (referred to as the dependent variable) from a set of factors (referred to as the independent variables). A multiple regression equation, or a model derived from a set of interrelated variables, achieves this end.

The usefulness of a model is determined by how well it can predict the behavior of the dependent variable from a set of independent variables. To clarify the concept, I will describe here an example of a research activity that aimed to develop a multiple regression model from both secondary and primary data sources.

What is a Model?

Before anything else, it is always good practice to define what we mean here by a model. A model, in the context of research as well as statistics, is a representation of reality using variables that somehow relate with each other. I italicize the word “somehow” here being reminded of the possibility of correlation between variables when in fact there is no logical connection between them.

A classic example given to illustrate nonsensical correlation is the high correlation between length of hair and height. A study found that people with short hair tend to be tall, and vice-versa.

Actually, the conclusion of that study is spurious because there is no real correlation between length of hair and height. It just so happens that men usually have short hair while women have long hair, and men, in general, are taller than women. The true variable that determines height is the sex of the individual, not the length of hair.

At best, a model is only an approximation of the likely outcome of things, because there will always be errors involved in building it. This is the reason why scientists adopt a five percent error standard in drawing conclusions from statistical computations. There is no such thing as absolute certainty in predicting the probability of a phenomenon.

Things Needed to Construct A Model

In developing a multiple regression model which will be fully described here, you will need to have a clear idea of the following:

1. What is your intention or reason in constructing the model?
2. What is the time frame and unit of your analysis?
3. What has been done so far in line with the model that you intend to construct?
4. What variables would you like to include in your model?
5. How would you ensure that your model has predictive value?

These questions will guide you towards developing a model that will help you achieve your goal. I explain in detail the expected answers to the above questions. Examples are provided to further clarify the points.

Purpose in Constructing the Model

Why would you like to have a model in the first place? What would you like to get from it? The objectives of your research should therefore be clear enough that you can derive full benefit from the model.

In this particular case, the main purpose of the model I sought to develop was to determine the predictors of the number of published papers produced by the faculty of the university. The major question, therefore, is:

“What are the crucial factors that will motivate the faculty members to engage in research and publish research papers?”

When I served as research director of the university, I figured that the best way to increase the number of research publications was to zero in on the variables that really matter. So many variables influence the turnout of publications, but which ones really matter? A certain number of research publications is required each year, so what should the interventions be to reach those targets?

Time Frame and Unit of Analysis

You should have a specific time frame on which to base your analysis. There are many considerations in selecting the time frame, but of foremost importance is the availability of data. For established universities with consistent data collection fields, this poses no problem; for struggling universities without an established database, it will be much more challenging.

Why do I say consistent data collection fields? If you want to see trends, then the same data must be collected in a series through time. What do I mean by this?

In the particular case I mentioned, i.e., the number of publications, one of the suspected predictors is the amount of time faculty spend on administrative work. In a 40-hour work week, how much time do they spend in designated posts such as unit head, department head, or dean? This variable, measured for each faculty member (the unit of analysis), should therefore be consistently monitored every semester, over many years, for possible correlation with the number of publications.

How many years should these data cover? From what I gather, a peer-reviewed publication normally takes two to three years to produce. Hence, the study must cover at least three years of data to be able to log the number of publications produced. That is, if no systematic data collection was made beforehand to supply the data needed by the study.

If data was systematically collected, you can backtrack and get data for as long as you want. It is even possible to compare publication performance before and after a research policy was implemented in the university.

Review of Literature

You might be guilty of “reinventing the wheel” if you do not take time to review the published literature on your specific research concern. Reinventing the wheel means you duplicate the work of others. It is possible that other researchers have already satisfactorily studied the area you are trying to clarify issues on. For this reason, an exhaustive review of literature will enhance the quality and predictive value of your model.

For the model I attempted to build on the number of publications produced by the faculty, I came across a summary of the predictors made by Bland et al.[1] based on a considerable number of published papers. Below is the model they prepared to sum up their findings.

Bland and colleagues found that three major areas determine research productivity, namely: 1) the individual’s characteristics, 2) institutional characteristics, and 3) leadership characteristics. This means that you cannot just threaten the faculty with the so-called “publish or perish” policy if the required institutional resources are absent and/or leadership quality is poor.

Select the Variables for Study

The model given by Bland and colleagues in the figure above is still too general to allow statistical analysis. For example, among the individual characteristics, how can socialization as a variable be measured? How about motivation?

This requires you to delve further into the literature on how to properly measure socialization and motivation, among the other variables you are interested in. The dependent variable I chose to reflect productivity, in a recent study I conducted with students, is the total number of publications, whether peer-reviewed or not.

Ensuring the Predictive Value of the Model

The predictive value of a model depends on the degree of influence of a set of predictor variables on the dependent variable. How do you determine the degree of influence of these variables?

In Bland’s model, all the variables associated with the concepts identified may be included in the analysis. But of course, this would be costly and time consuming, as there are a lot of variables to consider. Besides, the greater the number of variables you include in your analysis, the more samples you will need to obtain a good correlation between the predictor variables and the dependent variable.

Stevens[2] recommends a nominal number of 15 cases per predictor variable. This means that if you want to study 10 variables, you will need at least 150 cases for your multiple regression model to be valid in some sense. But of course, the more samples you have, the greater the certainty in predicting outcomes.
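Stevens’ rule of thumb is a simple multiplication, which can be expressed as a small helper function:

```python
def required_cases(n_predictors, cases_per_predictor=15):
    """Stevens' rule of thumb: about 15 cases per predictor variable."""
    return n_predictors * cases_per_predictor

print(required_cases(10))  # 150 cases for a 10-predictor model
print(required_cases(3))   # 45 cases for a 3-predictor model
```

This makes it easy to see how quickly the required sample size grows as predictors are added.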

Once you have decided on the number of variables to incorporate in your multiple regression model, you can then input your data into a spreadsheet or a statistical software package such as SPSS or Statistica. The software will automatically produce the results for you.
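To make the idea concrete, here is a minimal sketch of fitting a multiple regression by ordinary least squares in Python with NumPy. The data are synthetic, and the two predictors (research hours and graduate degrees) are invented for illustration, not variables from the actual study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: publications "generated" from two made-up predictors
n = 30
research_hours = rng.uniform(0, 20, n)     # hours per week spent on research
grad_degrees = rng.integers(0, 3, n)       # number of graduate degrees
publications = 0.3 * research_hours + 1.5 * grad_degrees + rng.normal(0, 0.5, n)

# Fit y = b0 + b1*x1 + b2*x2 by ordinary least squares
X = np.column_stack([np.ones(n), research_hours, grad_degrees])
coefs, *_ = np.linalg.lstsq(X, publications, rcond=None)
print("intercept, hours, degrees:", np.round(coefs, 2))
```

Because the data were generated with known coefficients (0.3 and 1.5), the fitted values should come out close to them, which is exactly the sense in which a regression model “recovers” the influence of each predictor.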

The next concern is how to interpret the results of such a model, i.e., the results of a multiple regression analysis. I will take up this topic in my upcoming posts.

Note

A model is only as good as the data used to create it. You must therefore make sure that your data is accurate and reliable for better predictive outcomes.

References:

1. Bland, C.J., Center, B.A., Finstad, D.A., Risbey, K.R., and J. G. Staples. (2005). A Theoretical, Practical, Predictive Model of Faculty and Department Research Productivity. Academic Medicine, Vol. 80, No. 3, 225-237.
2. Stevens, J. (2002). Applied multivariate statistics for the social sciences, 3rd ed. New Jersey: Lawrence Erlbaum Publishers. p. 72.

Simplified Explanation of Probability in Statistics

Do you have trouble understanding the concept of probability? Do you ask yourself why you have to read that section on probability in your statistics book that seems to have no bearing on your research? Don’t despair. Read the following article and have a clear understanding of this concept that you will find very useful in your research venture.

One of the topics in the Statistics course that students have difficulty understanding is the concept of probability. But is “probability” really a difficult thing to understand? In reality, it is not that difficult, as long as you gain an understanding of how it works when you try to compare differences or correlations between variables.

It simply works this way:

The classic example used to illustrate probability is the coin toss. Everybody knows that a coin has two sides: the head, which normally bears the face of someone along with the amount the coin represents, and the tail, which typically shows the government bank that issued the currency.

Now, if you flip the coin, it will land and settle with one side up; unless you get a weird result where the coin unexpectedly lands on its edge, in between the head and the tail sides (see Fig. 1). This is a possibility, though a very, very remote one (what if the government decided to mint a coin thick enough to make it likely?). I include it because I once flipped a coin and it landed next to an object that made it stand on its edge instead of falling on either the head or the tail side. That just means unexpected things can happen given the right circumstances.

I have to illustrate this with a picture because some students do not know which side of a coin is the head and which is the tail. So, no excuses for not understanding what we are talking about here.

For our purpose, we will leave out the in-between possibility and concentrate on the possibility of getting either a head or a tail when a coin is flipped and allowed to settle on level ground or on your palm. Since there are only two possibilities, we can say that there is a 50-50, 0.5, or 1/2 chance that the coin will land as head or tail. In statistics, this possibility is written thus:

p = 0.5

where p is the probability symbol and the value 0.5 is the estimated chance that the coin will land on either the head or the tail. Alternatively, we can say there is an equal chance of getting a head or a tail in a series of coin tosses on level ground.

Therefore, if you toss a coin 10 times, the probability of getting either a head or a tail in each toss is 50%, 0.5, or 1/2. That means that in 10 tosses, you will likely get 5 heads and 5 tails. If you toss the coin 100 times, you will likely get 50 heads and 50 tails.
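You can verify this yourself with a quick simulation. The sketch below tosses a virtual coin many times and shows that the proportion of heads settles near p = 0.5.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

tosses = 10_000
# Each toss is heads with probability 0.5
heads = sum(random.random() < 0.5 for _ in range(tosses))

proportion = heads / tosses
print(f"Proportion of heads in {tosses} tosses: {proportion:.3f}")
```

The more tosses you simulate, the closer the proportion gets to 0.5, which is the long-run meaning of probability.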

If you throw a six-sided die, the probability of each side coming up in each throw is 1/6. If you throw a four-sided die (a tetrahedron), the probability of each side is 1/4.

Application

This background knowledge can help you understand the importance of the p-value in statistical tests.

For example, suppose you are interested in knowing whether a significant difference exists between two groups, say, a comparison of the test scores of a group of students who were given remedial classes and another group that did not undergo remedial classes, and statistical software was used to analyze the data (presumably with a t-test). You just have to look at the p-value to find out whether there is indeed a significant difference in achievement between the two groups. If the p-value is 0.05 or lower, then you can safely say that there is sufficient evidence of a significant difference, and if the remedial group’s mean score is higher, that the students who underwent remedial classes performed better (in terms of their test scores) than those who did not.
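Here is a minimal sketch of that t-test in Python with SciPy. The scores are hypothetical numbers for illustration, not data from an actual study.

```python
from scipy import stats

# Hypothetical test scores for the two groups
with_remedial = [85, 88, 90, 84, 87, 91, 86, 89]
without_remedial = [78, 75, 80, 77, 79, 74, 81, 76]

# Independent-samples t-test comparing the two groups
t_stat, p_value = stats.ttest_ind(with_remedial, without_remedial)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value <= 0.05:
    print("Reject the null hypothesis: the difference is significant")
```

With these made-up scores, the p-value comes out well below 0.05, so the null hypothesis of no difference would be rejected.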

For clarity, here are the null and alternative hypotheses that you can formulate for this study:

Null Hypothesis: There is no significant difference between the test scores of students who took remedial classes and students who did not take remedial classes.

Alternative Hypothesis: There is a significant difference between the test scores of students who took remedial classes and students who did not take remedial classes.

A p-value of 0.05 simply means that there is only a 5% probability or chance of observing such a difference if students given remedial classes actually perform similarly to those who were not. This probability is quite low, so you may reject your null hypothesis that there is no difference in the test scores of students with or without remedial classes. If you reject the null hypothesis, then you accept your alternative hypothesis: there is a significant difference between the test scores of students who took remedial classes and students who did not.

Of what use is this finding, then? The results show that giving remedial classes can indeed benefit students; as the results of the study indicate, it can significantly increase students’ test scores.

You may then present the results of your study and confidently recommend that remedial classes be given to students to help improve their test scores in whatever subject that may be.

That’s how statistics work in research.