
Using Multivariate Statistical Tools to Analyze Customer and Survey Data


Businesses are getting more and more data from existing and potential customers: whenever we click on a website, for example, that click can be recorded in the vendor's database. And whenever we use electronic ID cards to access public transportation or other services, our movements across the city may be analyzed.

In the very near future, connected objects such as cars and electrical appliances will continuously generate data that will provide useful insights regarding user preferences, personal habits, and more. Companies will learn a lot from users and the way their products are being used. This learning process will help them focus on particular niches and improve their products according to customer expectations and profiles.

For example, insurance companies will monitor how motorists are driving connected cars, to adjust insurance premiums according to perceived risks, or to analyze driving behaviors so they can advise motorists how to boost fuel efficiency. No formal survey will be needed, because customers will be continuously surveyed.

Let's look at some statistical tools we can use to create and analyze user profiles, map expectations, study which expectations are related, and so on. I will focus on multivariate tools, which are very efficient methods for analyzing surveys and taking into account a large number of variables. My objective is to provide a very high level, general overview of the statistical tools that may be used to analyze such survey data.

A Simple Example of Multivariate Analysis

Let us start with a very simple example. The table below presents data some customers have shared about their enjoyment of specific types of food:

A simple look at the table does not really help us easily understand preferences. So we can use Simple Correspondence Analysis, a multivariate statistical tool, to visually display these preferences.

In Minitab, go to Stat > Multivariate > Simple Correspondence Analysis... and enter your data as shown in the dialogue box below. (Also click on "Graphs" and check the box labeled "Symmetric plot showing rows and columns.")

Minitab creates the following plot: 

Looking at the plot, we quickly see that vegetables tend to be associated with “Disagree” (positioned close to each other in the graph) and Ice cream is positioned close to “Neutral” (they are related to each other). As for Meat and Potatoes, the panel tends either to “Agree” or “Strongly agree.”

We now have a much better understanding of the preferences of our panel, because we know what they tend to like and dislike.
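
If you ever want to reproduce this kind of analysis outside Minitab, the heart of simple correspondence analysis is a singular value decomposition of the standardized residuals of the contingency table. Below is a minimal Python sketch; the counts are invented for illustration, since the blog's actual table only appears in a screenshot.

```python
import numpy as np

# Hypothetical contingency table: rows = foods, columns = agreement levels
# (counts invented for illustration).
foods = ["Vegetables", "Ice cream", "Meat", "Potatoes"]
levels = ["Disagree", "Neutral", "Agree", "Strongly agree"]
N = np.array([[20.0,  8,  5,  2],
              [ 6,   18,  7,  4],
              [ 3,    6, 15, 11],
              [ 2,    5, 14, 14]])

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row masses
c = P.sum(axis=0)                    # column masses
S = np.diag(r**-0.5) @ (P - np.outer(r, c)) @ np.diag(c**-0.5)
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

row_coords = np.diag(r**-0.5) @ U * sv      # principal coordinates for rows
col_coords = np.diag(c**-0.5) @ Vt.T * sv   # principal coordinates for columns

# The first two columns of row_coords and col_coords are the points a
# symmetric plot (like Minitab's) displays on Component 1 vs. Component 2.
for name, xy in zip(foods, row_coords[:, :2]):
    print(f"{name:12s} {xy}")
for name, xy in zip(levels, col_coords[:, :2]):
    print(f"{name:14s} {xy}")
```

Plotting those two sets of coordinates on the same axes reproduces the kind of symmetric plot shown above.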

Selecting the Right Type of Tool to Analyze Survey Data

Many multivariate tools are available, so how can you choose the right one to analyze your survey data?

The decision tree below shows which method you might choose according to your objectives and the type of data you have. For example, we selected correspondence analysis in the previous example because all our variables were categorical, or qualitative in nature.

 

Categorical Data and Prediction of Group Membership (Right Branch) 

Clustering
If you have some numerical (or continuous) data and you want to understand how your customers might be grouped / aggregated (from a statistical point of view) into several homogeneous groups, you can use clustering techniques. This could be helpful to define profiles and user groups.

Discriminant Analysis or Logistic Regression (Scoring)
If your individuals already belong to different groups and you want to understand which variables are important to define an existing user group, or predict group membership for new individuals, you can use discriminant analysis, or binary logistic regression (if you only have two groups).

Correspondence Analysis 
As we saw in the first example, correspondence analysis lets us study relationships between variables that are categorical / qualitative.

Numeric or Continuous Data Analysis (Left Branch)

Principal Component Analysis or Factor Analysis
If all your variables are numeric, you can use principal components analysis to understand how variables are related to one another. Factor analysis may be useful to identify an underlying, unknown factor associated with your variables.
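
As a tiny illustration of what principal components analysis actually computes, here is a numpy sketch on invented survey scores: standardize the items, take the SVD, and look at how much variation each component explains and how each item contributes to it.

```python
import numpy as np

# Hypothetical survey responses: rows = respondents, columns = numeric items
# (values invented for illustration).
X = np.array([[4.0, 3.5, 2.0, 1.5],
              [3.0, 3.0, 2.5, 2.0],
              [5.0, 4.5, 1.0, 1.5],
              [2.0, 2.5, 4.0, 4.5],
              [1.5, 2.0, 4.5, 4.0],
              [3.5, 3.0, 3.0, 2.5]])

# Principal components from the correlation structure: standardize, then SVD
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

explained = s**2 / np.sum(s**2)   # proportion of variation per component
directions = Vt.T                 # how each item contributes to each component

print(explained.round(2))
print(directions.round(2))
```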

Item Analysis
This tool was specifically created for survey analysis. Do the items of a survey evaluate similar characteristics? Which items differ from the remaining questions? The objective is to assess the internal consistency of a survey.

These multivariate analyses are computationally intensive, but performing them in Minitab is very user-friendly, and the software produces easy-to-understand graphs (as in the food preference example above).

A Closer Look at Some Specific Multivariate Tools

Let's take a closer look at the tools for numerical survey data analysis. The graph below shows the tools that are available to you and their objectives in each case. These methods are often used to group numeric variables according to similarity; they may also be useful in studying how individuals are positioned relative to the main groups of variables in order to identify user profiles.

 

And now let's look a bit more closely at the tools we can use for analyzing categorical survey data. Again, the diagram below shows the tools that are available to you and their objectives. Many of these tools can be used to study how numeric variables relate to qualitative categories.

Conclusion

This is a very general overview of multivariate tools for survey analysis. If you want to go deeper and learn more about these techniques, you can find some resources on the Minitab web site, in the Help menu in Minitab's statistical software, or you can contact our technical support team.


Using Fitness Tracker Data to Make Wise Decisions: Are You Working Out in the Right Zone?


Technology is very much part of our lives nowadays. We use our smartphones to have video calls with our friends and family, and watch our favourite TV shows on tablets. Technology has also transformed the fitness industry with the increasing popularity of fitness trackers.

Recently, I got myself a fitness watch and it's becoming my favourite gadget. It can track how many steps I’ve taken, my heart rate during a workout, and how many calories I've burned during my workout and over the whole day. Based on the calories burned, I can adjust my diet to ensure I have eaten what I require for the day. I’ve been collecting data from my weekly Zumba sessions, gym workouts and lunch-time walks. After collecting data for over a month, I decided to do some analysis with it using Minitab. Below is a snapshot of the data I collected in Minitab.

fitbit data

For each activity, I have the following information:

  • Duration of exercise in minutes and seconds
  • Time spent (rounded to the nearest minute) in the peak/high-intensity heart-rate zone—heart rate greater than 85% of maximum
  • Time spent (rounded to the nearest minute) in the cardio/medium-to-high-intensity heart-rate zone—heart rate between 70 and 84% of maximum
  • Time spent (rounded to the nearest minute) in the fat-burn/low-to-medium-intensity heart-rate zone—heart rate between 50 and 69% of maximum
  • Average heart rate during the session
  • Total calories burned during the session

It appears that a higher average heart rate results in more calories burned. This also depends on the time spent in the different heart-rate zones. Let's do some calculations using correlation coefficients.

Correlation - Cardio and Calories

As expected, all three variables are positively correlated with calories burned. However, spending hours on the treadmill is probably not a very good way to burn calories. With the best summer weather just around the corner, I need a more efficient way to exercise to lose the few pounds from my indulgence in the winter months!
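
For readers who prefer to check this in code, the same correlations are a one-liner with pandas. The values below are invented stand-ins for the tracker export, not my actual data.

```python
import pandas as pd

# Hypothetical workout log with the same kinds of columns as the tracker export
df = pd.DataFrame({
    "cardio_minutes":  [18, 25, 12, 30, 22, 15],
    "peak_minutes":    [ 3,  6,  1,  8,  5,  2],
    "fatburn_minutes": [20, 10, 25,  8, 14, 22],
    "avg_heart_rate":  [132, 145, 120, 150, 140, 126],
    "calories":        [260, 340, 210, 390, 320, 240],
})

# Pearson correlation of each variable with calories burned
print(df.corr()["calories"].round(2))
```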

According to research, exercising at higher intensity can result in more calories burned due to the “afterburn” effect. The afterburn effect is the additional calories burned after intensive exercise. Recently, at my local gym, they have introduced 30-minute HIIT (high-intensity interval training) sessions, which I am considering taking. Hence, fitting a regression model using my data will probably help me make the decision.

In Minitab, I opened Stat > Regression > Regression > Fit Regression Model, and completed the dialog and sub-dialog boxes as shown below.

fitbit regression dialog

fitbit regression subdialog

Instead of using a trial-and-error approach to select terms for the model, I will use the stepwise approach to help me identify suitable terms for the model.

fitbit stepwise regression

And after I press OK on each of my dialogs, Minitab returns the regression equation:

Regression Equation for Fitbit Data

fitbit regression model summary

The final model is quite decent, as the three types of R-squared values are all above 80%. This implies I can use this model to make predictions. The regression equation appears complex, but I can use the response optimizer in Minitab 17 to identify optimum settings to achieve my goal.
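
As a rough code equivalent, the same kind of model can be fitted with ordinary least squares in statsmodels. The terms below (main effects plus one interaction) only illustrate what a stepwise search might keep; they are not the terms Minitab actually selected, and the sketch reuses the invented df from the correlation example above.

```python
import statsmodels.formula.api as smf

# Fit an OLS model on the hypothetical workout log `df` defined earlier
model = smf.ols(
    "calories ~ cardio_minutes + fatburn_minutes + avg_heart_rate "
    "+ cardio_minutes:avg_heart_rate",
    data=df,
).fit()

print(model.summary())   # coefficients, R-squared, adjusted R-squared
```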

There is a common belief that 1 pound of fat (0.45 kilogram) is approximately equal to 3500 calories. Let’s say I aim to burn about 300 calories in each session. This means after about 12 sessions I would have lost approximately a pound of fat, provided I also had a healthy diet. Since exercising at a higher heart rate tends to burn more calories, I will also aim to maintain an average heart rate between, say, 128 and 148, which for me works out to somewhere between 70 and 80% of my maximum heart rate.

With all the conditions above, using Stat > Regression > Regression > Response Optimizer, here are some screenshots of the dialog boxes.

response optimizer for fitbit

response optimizer options for fitbit data

My target is 300 calories burned, and getting above 300 would be a bonus. Hence, I am using 310 as the upper limit.

fitbit upper limit

I would like to spend no more than 45 minutes per session, and hence I am using a maximum of 30 minutes exercising in the cardio zone and 15 minutes in the fat-burn zone.

Response optimization output for fitbit data

Fitbit optimizer response plot

To achieve my goal, I need to exercise in the cardio zone for about 21 minutes, exercise in the fat burn zone for about 15 minutes, and maintain my average heart rate at about 148 for the session.

I understand that the HIIT sessions involve very intense bursts of exercise followed by short, sometimes active, recovery periods. This type of training gets and keeps your heart rate up. Based on this, if out of a 30-minute HIIT session I can maintain about 21 minutes in the cardio zone and spend the rest of the session exercising in the fat-burn zone, I will be close to achieving my goal. I can always supplement this with a few minutes on the exercise bike or cross-trainer after the class.

Another good feature of the response optimizer is that I can evaluate different settings to see how changes affect the response. Let's consider the days when the HIIT class is not offered and I need to use the machines. I normally go for a longer session on the cross trainer (20-30 minutes), followed by a quick 10-minute session on the step machine. From past experience, I can easily get into the cardio heart-rate zone when using the cross-trainer. Now I can use the optimizer to predict the calories burned for 30 minutes of working out in the cardio zone and 10 minutes in the fat-burn zone. I will also use a lower average heart rate of 140.

By clicking on the current setup, I can input new settings.

Fitbit response optimizer new settings

response optimizer for fitbit data cardio heart rate zone

Well, this solution is not too far off from my target of 300 calories burned!
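
In code terms, this kind of what-if check is just prediction at new predictor settings. A short sketch using the statsmodels model fitted earlier and the same assumed column names:

```python
import pandas as pd

# 30 minutes in the cardio zone, 10 in the fat-burn zone, average heart rate 140
new_settings = pd.DataFrame({
    "cardio_minutes":  [30],
    "fatburn_minutes": [10],
    "avg_heart_rate":  [140],
})
print(model.predict(new_settings))   # predicted calories for these settings
```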

It’s turned out to be an enjoyable and informative experience analysing my own fitness data to see what my best workout options are. Taking the data collected by my fitness tracker and doing further analysis on it has definitely helped me to decide on how to exercise wisely and efficiently.   

 

Gym photo by Indigo Fitness Club Zurich, used under Creative Commons 2.0 license. 

2 Reasons 2 Recode Data and How 2 Do It in Less than 2 Minutes


It’s not easy to get data ready for analysis. Sometimes, data that include all the details we want aren’t clean enough for analysis. Even stranger, sometimes the exact opposite can be true: Data that are convenient to collect often don’t include the details that we want when we analyze them.

Let’s say that you’re looking at the documentation for the National Health and Nutrition Examination Survey (NHANES) from 2001-2002. By convention, the data set uses a symbol for missing values, but some variables have additional numeric codes for data that are missing for a specific reason. For example, one data set records hearing measurements (Audiometry). One variable in this data set is the middle ear pressure in the right ear, which has values from -282 to 180, but also includes these codes:

  • 555: Compliance <=0.2
  • 777: Refused
  • 888: Could not obtain

Although in some cases knowing how often each of these situations occurs could be important, to analyze the numeric data, you have to change these code values from numbers to something that won’t be analyzed. After all, leaving in a bunch of values that are more than twice what the maximum should be would have a serious effect on the mean of the data set.

In Minitab, try this:

  1. Choose Data > Recode > To Numeric.
  2. In Recode values in the following columns, enter the variables with the specialized missing values. If you’re following along with the NHANES data, the variable is AUXTMEPR.
  3. In Method, select Recode range of values.
  4. Complete the table with the endpoints and recoded values like this:

Lower endpoint    Upper endpoint    Recoded value
555               556               *
777               778               *
888               889               *

  5. In Endpoints to include, select Lower endpoint only. Click OK.

The resulting column has missing values instead of the coded values. And that means the statistics that you calculate will now have the correct values.
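
If you are doing the same cleanup in code, the equivalent step is to convert the special codes to missing values before computing any statistics. A small pandas sketch (the measurement values are invented; AUXTMEPR is the NHANES variable name):

```python
import numpy as np
import pandas as pd

# Hypothetical slice of the audiometry data with the 555/777/888 codes mixed in
df = pd.DataFrame({"AUXTMEPR": [-50, 10, 555, 180, 777, 888, -282]})

# Treat the special codes as missing so they cannot distort the mean
df["AUXTMEPR"] = df["AUXTMEPR"].replace([555, 777, 888], np.nan)

print(df["AUXTMEPR"].mean())   # computed from the real measurements only
```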

Recoding can let you prepare data with numeric measurements for correct analysis, but the CDC data sets also often use numeric codes to represent categories. For example, one variable records these codes for the status of an audio exam:

  • 1: Complete
  • 2: Partial
  • 3: Not done

Another reason to recode your data before analyzing it is so that both the data itself and the values that subsequently appear as categories and on graphs are descriptive. You can recode these numeric codes to text in a similar fashion. Try this:

  1. Choose Data > Recode > To Text.
  2. In Recode values in the following columns, enter the variables with the numeric codes. If you are following along with the NHANES data, the variable is AUAEXSTS.
  3. In Method, select Recode individual values.
  4. Complete the table with the current values and recoded values like this:

Current value    Recoded value
1                Complete
2                Partial
3                Not done

  5. Click OK.

The resulting column has the text labels instead of the numeric codes. When you create graphs, the labels will be descriptive.
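
The code equivalent of recoding to text is a simple mapping from codes to labels. Again, a small pandas sketch with made-up values:

```python
import pandas as pd

# Hypothetical exam-status column using the NHANES numeric codes
exam = pd.DataFrame({"AUAEXSTS": [1, 3, 2, 1, 1, 3]})

# Recode the numeric codes to descriptive text labels
labels = {1: "Complete", 2: "Partial", 3: "Not done"}
exam["AUAEXSTS"] = exam["AUAEXSTS"].map(labels)

print(exam["AUAEXSTS"].value_counts())   # labels now appear in output and on graphs
```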

Sometimes, data that are good to collect differ from data that are good to analyze. Sometimes we need more detail in the data that we collect than we need in the data that we analyze, such as when we record the reason that data are missing. Sometimes, we need data that are faster to record than is convenient when we analyze data, so we use abbreviations or codes that aren’t as descriptive as they can be.

Fortunately, Minitab makes it easy for you to balance those needs by making it easy to manipulate your data, with features like recoding. Ready for more? Check out some of the ways that Minitab makes it easy to merge different worksheets together.

How to Identify Outliers (and Get Rid of Them)


An outlier is an observation in a data set that lies a substantial distance from other observations. These unusual observations can have a disproportionate effect on statistical analysis, such as the mean, which can lead to misleading results. Outliers can provide useful information about your data or process, so it's important to investigate them. Of course, you have to find them first. 

Finding outliers in a data set is easy using Minitab Statistical Software, and there are a few ways to go about it.

Finding Outliers in a Graph

If you want to identify them graphically and visualize where your outliers are located compared to rest of your data, you can use Graph > Boxplot.

Boxplot

This boxplot shows a few outliers, each marked with an asterisk. Boxplots are certainly one of the most common ways to visually identify outliers, but there are other graphs, such as scatterplots and individual value plots, to consider as well.

Finding Outliers in a Worksheet

To highlight outliers directly in the worksheet, you can right-click on your column of data and choose Conditional Formatting > Statistical > Outlier. Each outlier in your worksheet will then be highlighted in red, or whatever color you choose.

Conditional Formatting Menu in Minitab

Removing Outliers

If you then want to create a new data set that excludes these outliers, that’s easy to do too. Now I’m not suggesting that removing outliers should be done without thoughtful consideration. After all, they may have a story – perhaps a very important story – to tell. However, for those situations where removing outliers is worthwhile, you can first highlight outliers per the Conditional Formatting steps above, then right-click on the column again and use Subset Worksheet > Exclude Rows with Formatted Cells to create the new data set.

The Math

If you want to know the mathematics used to identify outliers, let's begin by talking about quartiles, which divide a data set into quarters:

  • Q1 (the 1st quartile): 25% of the data are less than or equal to this value
  • Q3 (the 3rd quartile): 25% of the data are greater than or equal to this value
  • IQR (the interquartile range): the distance between Q1 and Q3 (Q3 – Q1); it contains the middle 50% of the data

Outliers are then defined as any values that fall outside of:

Q1 – (1.5 * IQR)

or

Q3 + (1.5 * IQR)

Of course, rather than doing this by hand, you can leave the heavy-lifting up to Minitab and instead focus on what your data are telling you.
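
If you want to check the arithmetic yourself, here is a short sketch that computes the quartile-based limits and flags the points outside them. Note that NumPy's default quantile interpolation differs slightly from Minitab's, so borderline points can be classified differently.

```python
import numpy as np

data = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 15.9, 12.3, 8.1, 12.0])

q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
lower = q1 - 1.5 * iqr
upper = q3 + 1.5 * iqr

outliers = data[(data < lower) | (data > upper)]
cleaned = data[(data >= lower) & (data <= upper)]   # analogous to subsetting the worksheet

print(lower, upper, outliers)
```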

Don't see these features in your version of Minitab? Choose Help > Check for Updates to see if you're using Minitab 17.3.

Using the Nelson Rules for Control Charts in Minitab


by Matthew Barsalou, guest blogger

Control charts plot your process data to identify and distinguish between common cause and special cause variation. This is important, because identifying the different causes of variation lets you take action to make improvements in your process without over-controlling it.

When you create a control chart, the software you're using should make it easy to see where you may have variation that requires your attention. For example, Minitab Statistical Software automatically flags any control chart data point that is more than three standard deviations from the centerline, as shown in the I chart below.

I Chart of Data - Nelson Rules
I chart example with one out-of-control point.

A data point that is more than three standard deviations from the centerline is one indicator for detecting special-cause variation in a process. There are additional control chart rules introduced by Dr. Lloyd S. Nelson in his April 1984 Journal of Quality Technology column. The eight Nelson Rules are shown below, and if you're interested in using them, they can be activated in Minitab.
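
For reference, that first test is easy to reproduce by hand. Here is a minimal sketch, assuming the usual individuals-chart convention of estimating sigma from the average moving range divided by 1.128; the measurements are made up.

```python
import numpy as np

x = np.array([5.1, 4.8, 5.0, 5.3, 4.9, 5.2, 7.4, 5.0, 4.7, 5.1])

center = x.mean()
# For an individuals chart, sigma is typically estimated as the average
# moving range divided by d2 = 1.128 (the constant for subgroups of size 2).
sigma = np.abs(np.diff(x)).mean() / 1.128

ucl = center + 3 * sigma
lcl = center - 3 * sigma

# Test 1: flag any point beyond the 3-sigma control limits
flagged = np.where((x > ucl) | (x < lcl))[0]
print(ucl, lcl, flagged)
```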

Nelson Rules for special cause variation in control charts
The Nelson rules for tests of special causes. Reprinted with permission from Journal of Quality Technology©1984 ASQ, asq.org.

To activate the Nelson rules, go to Control Charts > Variables Charts for Individuals > Individuals... and then click on "I Chart Options." Go to the Tests tab and place a check mark next to the test you would like to select—or simply use the drop-down menu and select “Perform all tests for special causes,” as shown below.

Individual Charts Options in Minitab

The resulting session window explains which tests failed.

session window output

On the chart itself, the data points that failed each test are identified in red as shown below.

I chart of data

Simply activating all of the rules is not recommended—the false positive rate goes up as each additional rule is activated. At some point the control chart will become more sensitive than it needs to be, and corrective actions for special causes of variation may be implemented when only common cause variation is present.

Fortunately, Nelson provided detailed guidance on the correct application of his namesake rules. Nelson’s guidance on applying his rules for tests of special causes is presented below.

comments on test for special causes
Comments on tests for special causes. Reprinted with permission from Journal of Quality Technology©1984 ASQ, asq.org.

Nelson’s tenth comment is an especially important one, regardless of which tests have been activated.  

Minitab, together with the Nelson rules, can be very helpful, but neither can replace or remove the need for the analyst's judgment when assessing a control chart. These rules can, however, assist the analyst in making the proper decision. 

 

About the Guest Blogger

Matthew Barsalou is a statistical problem resolution Master Black Belt at BorgWarner Turbo Systems Engineering GmbH. He is a Smarter Solutions certified Lean Six Sigma Master Black Belt, ASQ-certified Six Sigma Black Belt, quality engineer, and quality technician, and a TÜV-certified quality manager, quality management representative, and auditor. He has a bachelor of science in industrial sciences, a master of liberal studies with emphasis in international business, and a master of science in business administration and engineering from the Wilhelm Büchner Hochschule in Darmstadt, Germany. He is the author of the books Root Cause Analysis: A Step-By-Step Guide to Using the Right Tool at the Right Time, Statistics for Six Sigma Black Belts, and The ASQ Pocket Guide to Statistics for Six Sigma Black Belts.

Those 10 Simple Rules for Using Statistics? They're Not Just for Research


Earlier this month, PLOS.org published an article titled "Ten Simple Rules for Effective Statistical Practice." The 10 rules are good reading for anyone who draws conclusions and makes decisions based on data, whether you're trying to extend the boundaries of scientific knowledge or make good decisions for your business. 

Carnegie Mellon University's Robert E. Kass and several co-authors devised the rules in response to the increased pressure on scientists and researchers—many, if not most, of whom are not statisticians—to present accurate findings based on sound statistical methods. 

Since the paper, and the discussions it has prompted, focus on scientists and researchers, it seems worthwhile to consider how the rules might apply to quality practitioners and business decision-makers as well. In this post, I'll share the 10 rules, some with a few modifications to make them more applicable to the wider population of all people who use data to inform their decisions. 

1. Statistical Methods Should Enable Data to Answer Specific Questions

As the article points out, new or infrequent users of statistics tend to emphasize finding the "right" method to use—often focusing on the structure or format of their data, rather than thinking about how the data might answer an important question. But choosing a method based on the data is putting the cart before the horse. Instead, we should start by clearly identifying the question we're trying to answer. Then we can look for a method that uses the data to answer it. If you haven't already collected your data, so much the better—you have the opportunity to identify and obtain the data you'll need.

2. Signals Always Come With Noise

If you're familiar with control charts used in statistical process control (SPC) or the Control phase of a Six Sigma DMAIC project, you know that they let you distinguish process variation that matters (special-cause variation) from normal process variation that doesn't need investigation or correction.

control chart
Control charts are one common tool used to distinguish "noise" from "signal." 

The same concept applies here: whenever we gather and analyze data, some of what we see in the results will be due to inherent variability. Measures of probability for analyses, such as confidence intervals, are important because they help us understand and account for this "noise." 

3. Plan Ahead, Really Ahead

Say you're starting a DMAIC project. Carefully considering and developing good questions right at the start of a project—the DEFINE stage—will help you make sure that you're getting the right data in the MEASURE stage. That, in turn, should result in a much smoother and stress-free ANALYZE phase—and probably more successful IMPROVE and CONTROL phases, too. The alternative? You'll have to complete the ANALYZE phase with the data you have, not the data you wish you had. 

4. Worry About Data Quality

gauge"Can you trust your data?" My Six Sigma instructor asked us that question so many times, it still flashes through my mind every time I open Minitab. That's good, because he was absolutely right: if you can't trust your data, you shouldn't do anything with it. Many people take it for granted that the data they get is precise and accurate, especially when using automated measuring instruments and similar technology. But how do you know they're measuring precisely and accurately? How do you know your instruments are calibrated properly? If you didn't test it, you don't know. And if you don't know, you can't trust your data. Fortunately, with measurement system analysis methods like gage R&R and attribute agreement analysis, we never have to trust data quality to blind faith. 

5. Statistical Analysis Is More Than a Set of Computations

Statistical techniques are often referred to as "tools," and that's a very apt metaphor. A saw, a plane, and a router all cut wood, but they aren't interchangeable—the end product defines which tool is appropriate for a job. Similarly, you might apply ANOVA, regression, or time series analysis to the same data set, but the right tool depends on what you want to understand. To extend the metaphor further, just as we have circular saws, jigsaws, and miter saws for very specific tasks, each family of statistical methods also includes specialized tools designed to handle particular situations. The point is that we select a tool to assist our analysis, not to define it. 

6. Keep it Simple

Many processes are inherently messy. If you've got dozens of input variables and multiple outcomes, analyzing them could require many steps, transformations, and some thorny calculations. Sometimes that degree of complexity is required. But a more complicated analysis isn't always better—in fact, overcomplicating it may make your results less clear and less reliable. It also potentially makes the analysis more difficult than necessary. You may not need a complex process model that includes 15 factors if you can improve your output by optimizing the three or four most important inputs. If you need to improve a process that includes many inputs, a short screening experiment can help you identify which factors are most critical, and which are not so important. 

7. Provide Assessments of Variability

No model is perfect. No analysis accounts for all of the observed variation. Every analysis includes a degree of uncertainty. Thus, no statistical finding is 100% certain, and that degree of uncertainty needs to be considered when using statistical results to make decisions. If you're the decision-maker, be sure that you understand the risks of reaching a wrong conclusion based on the analysis at hand. If you're sharing your results with stakeholders and executives, especially if they aren't statistically inclined, make sure you've communicated that degree of risk to them by offering and explaining confidence intervals, margins of error, or other appropriate measures of uncertainty. 

8. Check Your Assumptions

Different statistical methods are based on different assumptions about the data being analyzed. For instance, many common analyses assume that your data follow a normal distribution. You can check most of these assumptions very quickly using functions like a normality test in your statistical software, but it's easy to forget (or ignore) these steps and dive right into your analysis. However, failing to verify those assumptions can yield results that aren't reliable and shouldn't be used to inform decisions, so don't skip that step. If you're not sure about the assumptions for a statistical analysis, Minitab's Assistant menu explains them, and can even flag violations of the assumptions before you draw the wrong conclusion from an errant analysis. 
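
Outside Minitab, a quick normality check is only a couple of lines. This sketch uses SciPy's Shapiro-Wilk test (a different test than Minitab's default Anderson-Darling, but it serves the same purpose) on simulated stand-in data.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
sample = rng.normal(loc=10, scale=2, size=40)   # stand-in for your measurements

stat, p_value = shapiro(sample)
print(p_value)   # a small p-value (e.g., < 0.05) casts doubt on the normality assumption
```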

9. When Possible, Replicate and Verify Success!

In science, replication of a study—ideally by another, independent scientist—is crucial. It indicates that the first researcher's findings weren't a fluke, and provides more evidence in support of the given hypothesis. Similarly, when a quality project results in great improvements, we can't take it for granted those benefits are going to be sustained—they need to be verified and confirmed over time. Control charts are probably the most common tool for making sure a project's benefits endure, but depending on the process and the nature of the improvements, hypothesis tests, capability analysis, and other methods also can come into play.  

10. Make Your Analysis Reproducible: Share How You Did It

In the original 10 Simple Rules article, the authors suggest scientists share their data and explain how they analyzed it so that others can make sure they get the same results. This idea doesn't translate so neatly to the business world, where your data may be proprietary or private for other reasons. But just as science benefits from transparency, the quality profession benefits when we share as much information as we can about our successes. Of course you can't share your company's secret-sauce formulas with competitors—but if you solved a quality challenge in your organization, chances are your experience could help someone facing a similar problem. If a peer in another organization already solved a problem like the one you're struggling with now, wouldn't you like to see if a similar approach might work for you? Organizations like ASQ and forums like iSixSigma.com help quality practitioners network and share their successes so we can all get better at what we do. And here at Minitab, we love sharing case studies and examples of how people have solved problems using data analysis, too. 

How do you think these rules apply to the world of quality and business decision-making? What are your guidelines when it comes to analyzing data? 

 

Using Marginal Plots, aka "Stuffed-Crust Charts"


In my last post, we took the red pill and dove deep into the unarguably fascinating and uncompromisingly compelling world of the matrix plot. I've stuffed this post with information about a topic of marginal interest...the marginal plot.

Margins are important. Back in my English composition days, I recall that margins were particularly prized for the inverse linear relationship they maintained with the number of words that one had to string together to complete an assignment. Mathematically, that relationship looks something like this:

Bigger margins = fewer words

stuffed crustIn stark contrast to my concept of margins as information-free zones, the marginal plot actually utilizes the margins of a scatterplot to provide timely and important information about your data. Think of the marginal plot as the stuffed-crust pizza of the graph world. Only, instead of extra cheese, you get to bite into extra data. And instead of filling your stomach with carbs and cholesterol, you're filling your brain with data and knowledge. And instead of arriving late and cold because the delivery driver stopped off to canoodle with his girlfriend on his way to your house (even though he's just not sure if the relationship is really working out: she seems distant lately and he's not sure if it's the constant cologne of consumables about him, or the ever-present film of pizza grease on his car seats, on his clothes, in his ears?)

...anyway, unlike a cold, late pizza, marginal plots are always fresh and hot, because you bake them yourself, in Minitab Statistical Software.

I tossed some randomly-generated data around and came up with this half-baked example. Like the pepperonis on a hastily prepared pie, the points on this plot are mostly piled in the middle, with only a few slices venturing to the edges. In fact, some of those points might be outliers. 

Scatterplot of C1 vs C2

If only there were an easy, interesting, and integrated way to assess the data for outliers when we make a scatterplot.

Boxplots are a useful way to look for outliers. You could make separate boxplots of each variable, like so:

Boxplot of C1  Boxplot of C2

It's fairly easy to relate the boxplot of C1 to the values plotted on the y-axis of the scatterplot. But it's a little harder to relate the boxplot of C2 to the scatterplot, because the y-axis on the boxplot corresponds to the x-axis on the scatterplot. You can transpose the scales on the boxplot to make the comparison a little easier. Just double-click one of the axes and select Transpose value and category scales:

Boxplot of C2, Transposed

That's a little better. The only thing that would be even better is if you could put each boxplot right up against the scatterplot...if you could stuff the crust of the scatterplot with boxplots, so to speak. Well, guess what? You can! Just choose Graph > Marginal Plot > With Boxplots, enter the variables, and click OK.

Marginal Plot of C1 vs C2

Not only are the boxplots nestled right up next to the scatterplot, but they also share the same axes as the scatterplot. For example, the outlier (asterisk) on the boxplot of C2 corresponds to the point directly below it on the scatterplot. Looks like that point could be an outlier, so you might want to investigate further. 
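
If you ever need a comparable graph outside Minitab, the same idea can be stitched together in matplotlib: a scatterplot in the main panel and a boxplot in each margin that shares the scatterplot's axes. A rough sketch with randomly generated data:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = rng.normal(10, 2, 100)
y = 2 + 0.5 * x + rng.normal(0, 1, 100)

fig = plt.figure(figsize=(6, 6))
gs = fig.add_gridspec(2, 2, width_ratios=(4, 1), height_ratios=(1, 4),
                      wspace=0.05, hspace=0.05)
ax_scatter = fig.add_subplot(gs[1, 0])
ax_top = fig.add_subplot(gs[0, 0], sharex=ax_scatter)    # margin for x
ax_right = fig.add_subplot(gs[1, 1], sharey=ax_scatter)  # margin for y

ax_scatter.scatter(x, y)
ax_top.boxplot(x, vert=False)    # boxplot of x along the top margin
ax_right.boxplot(y)              # boxplot of y along the right margin
ax_top.axis("off")
ax_right.axis("off")
plt.show()
```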

Marginal plots can also help alert you to other important complexities in your data. Here's another half-baked example. Unlike our pizza delivery guy's relationship with his girlfriend, it looks like the relationship between the fake response and the fake predictor represented in this scatterplot really is working out: 

Scatterplot of Fake Response vs Fake Predictor 

In fact, if you use Stat > Regression > Fitted Line Plot, the fitted line appears to fit the data nicely. And the regression analysis is highly significant:

Fitted Line_ Fake Response versus Fake Predictor

Regression Analysis: Fake Response versus Fake Predictor

The regression equation is
Fake Response = 2.151 + 0.7723 Fake Predictor

S = 2.12304   R-Sq = 50.3%   R-Sq(adj) = 49.7%

Analysis of Variance
Source      DF       SS       MS      F      P
Regression   1  356.402  356.402  79.07  0.000
Error       78  351.568    4.507
Total       79  707.970

But wait. If you create a marginal plot instead, you can augment your exploration of these data with histograms and/or dotplots, as I have done below. Looks like there's trouble in paradise:

Marginal Plot of Fake Response vs Fake Predictor, with Histograms Marginal Plot of Fake Response vs Fake Predictor, with Dotplots

Like the poorly made pepperoni pizza, the points on our plot are distributed unevenly. There appear to be two clumps of points. The distribution of values for the fake predictor is bimodal: that is, it has two distinct peaks. The distribution of values for the response may also be bimodal.

Why is this important? Because the two clumps of toppings may suggest that you have more than one metaphorical cook in the metaphorical pizza kitchen. For example, it could be that Wendy, who is left handed, started placing the pepperonis carefully on the pie and then got called away, leaving Jimmy, who is right handed, to quickly and carelessly complete the covering of cured meats. In other words, it could be that the two clumps of points represent two very different populations. 

When I tossed and stretched the data for this example, I took random samples from two different populations. I used 40 random observations from a normal distribution with a mean of 8 and a standard deviation of 1.5, and 40 random observations from a normal distribution with a mean of 13 and a standard deviation of 1.75. The two clumps of data are truly from two different populations. To illustrate, I separated the two populations into two different groups in this scatterplot: 

 Scatterplot with Groups

This is a classic conundrum that can occur when you do a regression analysis. The regression line tries to pass through the center of the data. And because there are two clumps of data, the line tries to pass through the center of each clump. This looks like a relationship between the response and the predictor, but it's just an illusion. If you separate the clumps and analyze each population separately, you discover that there is no relationship at all: 

Fitted Line_ Fake Response 1 versus Fake Predictor 1

Regression Analysis: Fake Response 1 versus Fake Predictor 1

The regression equation is
Fake Response 1 = 9.067 - 0.1600 Fake Predictor 1

S = 1.64688   R-Sq = 1.5%   R-Sq(adj) = 0.0%

Analysis of Variance
Source      DF       SS       MS     F      P
Regression   1    1.609  1.60881  0.59  0.446
Error       38  103.064  2.71221
Total       39  104.673

Fitted Line_ Fake Response 2 versus Fake Predictor 2

Regression Analysis: Fake Response 2 versus Fake Predictor 2

The regression equation is
Fake Response 2 = 12.09 + 0.0532 Fake Predictor 2

S = 1.62074   R-Sq = 0.3%   R-Sq(adj) = 0.0%

Analysis of Variance
Source      DF       SS       MS     F      P
Regression   1    0.291  0.29111  0.11  0.741
Error       38   99.818  2.62679
Total       39  100.109

If only our unfortunate pizza delivery technician could somehow use a marginal plot to help him assess the state of his own relationship. But alas, I don't think a marginal plot is going to help with that particular analysis. Where is that guy anyway? I'm getting hungry. 

Does Major League Baseball Really Need the Second Half of the Season?


When you perform a statistical analysis, you want to make sure you collect enough data that your results are reliable. But you also want to avoid wasting time and money collecting more data than you need. So it's important to find an appropriate middle ground when determining your sample size.

Now, technically, the Major League Baseball regular season isn't a statistical analysis. But it does kind of work like one, since the goal of the regular season is to "determine who the best teams are." The National Football League uses a 16-game regular season to determine who the best teams are. Hockey and Basketball use 82 games. 

Baseball uses 162 games.

So is baseball wasting time collecting more data than it needs? Right now the MLB regular season is about halfway over. So could they just end the regular season now? Will playing another 81 games really have a significant effect on the standings? Let's find out.

How much do MLB standings change in the 2nd half of the season?

I went back through five years of records and recorded where each MLB team ranked in their league (American League and National League) on July 8, and then again at the end of the season. We can use this data to look at concordant and discordant pairs. A pair is concordant if the observations are in the same direction. A pair is discordant if the observations are in opposite directions. This will let us compare teams to each other two at a time.

For example, let's compare the Astros and Angels from 2015. On July 8th, the Astros were ranked 2nd in the AL and the Angels were ranked 3rd. At the end of the season, Houston was ranked 5th and the Angels were ranked 6th. This pair is concordant since in both cases the Astros were ranked higher than the Angels. But if you compare the Astros and the Yankees, you'll see the Astros were ranked higher on July 8th, but the Yankees were ranked higher at the end of the season. That pair is discordant.

When we compare every team, we end up with 11,175 pairs. How many of those are concordant? Minitab Statistical Software has the answer.

Measures of Concordance

There are 8,307 concordant pairs, which is just over 74% of the data. So most of the time, if a team is higher in the standings as of July 8th, they will finish higher in the final standings too. We can also use Spearman's rho and Pearson's r to assess the association between standings on July 8th and the final standings. These two values give us a coefficient that can range from -1 to +1. The larger the absolute value, the stronger the relationship between the variables. A value of 0 indicates the absence of a relationship. 
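
For anyone who wants to replicate these measures in code, SciPy has all three: Pearson's r, Spearman's rho, and Kendall's tau (which is built directly from concordant and discordant pairs). The ranks below are invented, not the actual standings data.

```python
from scipy.stats import kendalltau, pearsonr, spearmanr

# Hypothetical league ranks (1 = best) for a handful of teams on July 8
# and at the end of the season
july_rank  = [1, 2, 3, 4, 5, 6, 7, 8]
final_rank = [2, 1, 3, 6, 4, 5, 8, 7]

print(pearsonr(july_rank, final_rank))    # Pearson's r and its p-value
print(spearmanr(july_rank, final_rank))   # Spearman's rho and its p-value
print(kendalltau(july_rank, final_rank))  # Kendall's tau, from concordant/discordant pairs
```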

Pearsons r and Spearmans rho

Both values are high and positive, once again indicating that teams ranked higher than other teams on July 8th usually stay that way by the end of the season. So did we do it? Did we show that baseball doesn't really need the 2nd half of their season?

Not quite.

Consider that each league has 15 teams. So a lot of our pairs are comparing teams that aren't that close together, like the 1st team to the 15th, the 1st team to the 14th, the 2nd team to the 15th, and so on. It's not very surprising that those pairs are going to be concordant. So let's dig a little deeper and compare each individual team's ranking in July with its ranking at the end of the season. The following histogram shows the difference in a team's rank. Positive values mean the team moved up in the standings, negative values mean they fell.

Histogram

The most common outcome is that a team doesn't move up or down in the standings, as 34 of our observations have a difference of 0. However, there are 150 total observations, so most of the time a team does move up or down. In fact, 55 times a team moved up or down in the standings by 3 or more spots. That's over a third of the time! And there are multiple instances of a team moving 6, 7, or even 8 spots! That doesn't seem to imply that the 2nd half of the season doesn't matter. So what if we narrow the scope of our analysis?

Looking at the Playoff Teams

We previously noted that the regular season is supposed to determine the best teams. So let's focus on the top of the MLB standings. I took the top 5 teams in each league (since the top 5 teams make the playoffs) on July 8th, and recorded whether they were still a top 5 team (and in the playoffs) at the end of the season. The following pie chart shows the results.

Pie Chart

Twenty-eight percent of the time, a team that was in the playoffs in July fell far enough in the standings to drop out. So over a quarter of your playoff teams would be different if the season ended around 82 games. That sounds like a significant effect to me. And last, let's return to our concordant and discordant pairs. Except this time, we'll just look at the top half of the standings (top 8 teams). 

Measures of Concordance

This time our percentage of concordant pairs has dropped to 59%, and the values for Spearman's rho and Pearson's r show a weaker association. Teams ranked higher in the 1st half of the season are usually still ranked higher at the end of the season. But there is clearly enough shuffling among the top teams to warrant the 2nd half of the season. So don't worry, baseball fans: your regular season will continue through September.

Because, you know, Major League Baseball totally would have shortened the season if this statistical analysis suggested doing so!

And if you're looking to determine the appropriate sample size for your own analysis, Minitab offers a wide variety of power and sample size analyses that can help you out.

 


A Visual Look at Baseball's All-Star Teams


Last Tuesday night, Major League Baseball announced the rosters for tomorrow's All-Star game in San Diego. Immediately, as I'm sure was anticipated, people began talking about who made it and who didn't. Who got left out, and who shouldn't have made it.

As a fun little exercise, I decided to take a visual look at the all-star teams, to see what kind of players were selected. I looked at position players only (no pitchers) and made a simple scatterplot, with the x-axis representing their offensive value so far this season, and the y-axis representing their defensive value. This would allow me to see any extreme outliers in terms of value generated so far this year. In Minitab Statistical Software, this command can be found by going to Graph > Scatterplot. I also added data labels through the Editor menu (Editor > Add > Data Labels) so that I could see which point on the plot corresponds to which player.

The plot below shows the American League selections:

scaterrplot

Looking at the graph, some groupings become apparent. The most populated quadrant is the upper right, which represents a high offensive and defensive value. For an all-star team, this makes sense: these are the best of the best. Here is where you'll find names like Mike Trout, Josh Donaldson, and Jose Altuve, the American League leaders in Wins Above Replacement, which is a metric that tries to capture all of a player's value into one nice statistic.

Another grouping that becomes apparent is the upper left quadrant. This is where we see our defensive maestros. To fall in the upper left quadrant, you need to have a high defensive value and a (relatively) low offensive output. We have a shortstop and three catchers here, which makes sense given that those are the two most demanding defensive positions. 

The lower right corner represents players whose value is mostly on offense. Here we see Edwin Encarnacion, David Ortiz, and Mark Trumbo. Their defensive value is so low because they don't even play defense—they are designated hitters. 

This is a fun way to visualize what kind of all-stars we have, and what they excel at. If the manager needs to make a late-game defensive substitution, this graph can show us where they might lean. Additionally, if they need one pinch hitter for a key at-bat, we can see whom they might lean on by looking at the other end of the graph.

We looked at the American League in detail up above, and I've also created the same plot for the National League below:

nl

*Note: All statistics from fangraphs.com

What Were the Odds of Getting into Willy Wonka's Chocolate Factory?


In the great 1971 movie Willy Wonka and the Chocolate Factory, the reclusive owner of the Wonka Chocolate Factory decides to place golden tickets in five of his famous chocolate bars and allow each winner to visit his factory with a guest. Since the factory restarted production after three years of silence, no one has come in or gone out. Needless to say, there is enormous interest in finding a golden ticket!

Through a series of news reports we get an understanding that all over the world, kids are desperately purchasing and opening Wonka bars in an attempt to win. But just what were the odds? Unfortunately young Charlie Bucket's teacher is not particularly good at percentages and doesn't offer much help:

I hope I can be at least a little more useful. While the movie only vaguely suggests how many bars were actually being opened, we are provided with two data points. First, the spoiled, bratty, unlikable Veruca Salt's factory-owning father states that he's had his workers open 760,000 Wonka bars just before one of them finds a golden ticket:

Meanwhile the polite, likable Charlie Bucket—who is very poor—has received one Wonka Bar for his birthday and another from his Grandpa Joe. Neither bar was a winner, but Charlie finds some money on the street to buy a third:

In the movie, you can't help but feel that Charlie's odds must have been much, much higher than the nasty Veruca Salt's (or any of the other winners). But is there statistical evidence of that?

In Minitab Statistical Software, I set up a basic 2x2 table like this:

2x2 table

Often when practitioners have a 2x2 table, the Chi-Square test immediately comes to mind. But the Chi-Square test is not accurate when any of the cell counts or expected cell counts are small, which is clearly the case here. We can use Fisher's exact test without such a restriction; it is available in the "Other Stats" subdialog of Stat > Tables > Cross Tabulation and Chi-Square. The output looks like this:

Fishers output

For the Chi-Square portion of the output, Minitab not only refuses to provide a p-value but gives two warnings and a note. The Fisher's exact test can be performed, however, and it tests whether the likelihood of a winning ticket was the same for both Charlie and Veruca. The p-value of 0.0000079 confirms what we all knew—karma was working for Charlie and against Veruca!
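
The same test is easy to reproduce in SciPy using the 2x2 table described above; the p-value should come out very close to the 0.0000079 that Minitab reports.

```python
from scipy.stats import fisher_exact

# Rows: Charlie, Veruca; columns: golden ticket found, no ticket
table = [[1, 2],
         [1, 759_999]]

odds_ratio, p_value = fisher_exact(table)   # two-sided by default
print(p_value)
```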

For fun, let's ignore this evidence that the odds were not equal for each child. Let's pretend that the odds are the same, and a really unlikely thing happened anyway because that's what makes the movie great. Aside from our two data points, we have reports from two children in the classroom that they have opened 100 and 150 bars, respectively, and neither won. So we have two golden tickets among 3 + 760,000 + 100 + 150 = 760,253 Wonka bars. This would be a proportion of 2/760,253 = 0.00000263, or about 0.000263%. Think those odds are low? That represents an inflated estimate! That is because rather than randomly sampling many children, our sample includes two known winners. Selecting four children at random would almost certainly produce four non-winners and the estimate would be 0%.

There is one additional data point that doesn't really make logical sense, but let's use it to come up with a low-end estimate by accepting that it is likely not a real number. At one point, a news reporter indicates that five tickets are hidden among the "countless billions of Wonka bars." Were there actually "countless billions" of unopened Wonka bars in the world? Consider that the most popular chocolate bar in the world—the famous Hershey bar—has annual sales of about 250 million units. And that's per year! It is very, very unlikely that there were countless billions of unopened Wonka bars from that single factory at any one time. Further, that news report is about the contest being announced, so the Wonka factory had not yet delivered the bars with the golden tickets inside. Suffice to say, this is not an accurate number.

But let's suppose that even 1 billion Wonka bars were produced in the run that contained the golden tickets. Then the odds of a single bar containing one would be 5/1,000,000,000 = 0.000000005 or 0.0000005%.

Either way, the chances of finding one were incredibly low...confirming again what grandpa Joe told Charlie:

CHARLIE: "I've got the same chance as anybody else, haven't I?"

GRANDPA JOE: "You've got more, Charlie, because you want it more! Go on, open it!"

DOE Center Points: What They Are & Why They're Useful


Design of Experiments (DOE) is the perfect tool to efficiently determine if key inputs are related to key outputs. Behind the scenes, DOE is simply a regression analysis. What’s not simple, however, is all of the choices you have to make when planning your experiment. What X’s should you test? What ranges should you select for your X’s? How many replicates should you use? Do you need center points? Etc. So let’s talk about center points.

What Are Center Points?

Center points are simply experimental runs where your X’s are set halfway between (i.e., in the center of) the low and high settings. For example, suppose your DOE includes these X’s:

TimeAndTemp

The center point would then be set midway at a Temperature of 150 °C and a Time of 20 seconds.

And your data collection plan in Minitab Statistical Software might look something like this, with the center points shown in blue:

Minitab Worksheet

You can have just 1 center point, or you can collect data at the center point multiple times. This particular design includes 2 experimental runs at the center point. Why pick 2, you may be asking? We’ll talk about that in just a moment.
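
For illustration, here is how such a design could be laid out in code. The low and high settings (Temperature 120 to 180 °C, Time 10 to 30 seconds) are guesses that are consistent with the 150 °C / 20 second center point mentioned above, since the actual worksheet only appears in a screenshot.

```python
from itertools import product

# Assumed factor settings (low, high) -- not the blog's actual worksheet values
temperature = (120, 180)   # degrees C
time_s = (10, 30)          # seconds

# 2^2 full factorial corner runs
runs = [{"Temperature": t, "Time": s} for t, s in product(temperature, time_s)]

# Center point: halfway between low and high for every factor
center = {"Temperature": sum(temperature) / 2, "Time": sum(time_s) / 2}
runs += [dict(center), dict(center)]   # two center-point runs, as in the design above

for run in runs:
    print(run)
```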

Why Should You Use Center Points in Your Designed Experiment?

Including center points in a DOE offers many advantages:

1. Is Y versus X linear?

Factorial designs assume there’s a linear relationship between each X and Y. Therefore, if the relationship between any X and Y exhibits curvature, you shouldn’t use a factorial design because the results may mislead you.

So how do you statistically determine if the relationship is linear or not? With center points! If the center point p-value is significant (i.e., less than alpha), then you can conclude that curvature exists and use response surface DOE—such as a central composite design—to analyze your data. While factorial designs can detect curvature, you have to use a response surface design to model (build an equation for) the curvature.

Bad fit: factorial design. Good fit: response surface design.

And the good news is that curvature often indicates that your X settings are near an optimum Y, and you've discovered insightful results!

2. Did you collect enough data?

If you don’t collect enough data, you aren’t going to detect significant X’s even if they truly exist. One way to increase the number of data points in a DOE is to use replicates. However, replicating an entire DOE can be expensive and time-consuming. For example, if you have 3 X’s and want to replicate the design, then you have to increase the number of experimental runs from 8 to 16!

Fortunately, using replicates is just one way to increase power. An alternative way to increase power is to use center points. By adding just a few center points to your design, you can increase the probability of detecting significant X’s, and estimate the variability (or pure error, statistically speaking).

Learn More about DOE

DOE is a great tool. It tells you a lot about your inputs and outputs and can help you optimize process settings. But it’s only a great tool if you use it the right way. If you want to learn more about DOE, check out our e-learning course Quality Trainer for $30 US. Or, you can participate in a full-day Factorial Designs course at one of our instructor-led training sessions.

Conditional Formatting of Large Residuals and Unusual Combinations of Predictors


If you've used our software, you’re probably used to many of the things you can do in Minitab once you’ve fit a model. For example, after you fit a response to a given model for some predictors with Stat > DOE > Response Surface > Analyze Response Surface Design, you can do the following:

  • Predict the mean value of the response variable for new combinations of settings of the predictors.
  • Draw factorial plots, surface plots, contour plots, and overlaid contour plots.
  • Use the model to find combinations of predictor settings that optimize the predicted mean of the response variable.

In the Response Surface menu, you can see tools that you can use with a fitted model: Predict, Factorial Plots, Contour Plot, Surface Plot, Overlaid Contour Plot, Response Optimizer

But once your response has that little green check box that says you have a valid model, there’s even more that you can do. For example, you can also use conditional formatting to highlight two kinds of rows:

  • Unusual combinations of predictor values
  • Values of the response variable that the model does not explain well

Want to try it out? You can follow along using this data set about how deep a stream is and how fast the water flows. Open the data set in Minitab, then:

  1. Choose Stat > Regression > Regression > Fit Regression Model.
  2. In Response, enter Flow.
  3. In Continuous Predictors, enter Depth.
  4. In Categorical Predictors, enter Location. Click OK.

Once you’ve clicked OK, the green checkbox will appear in your worksheet to show that you have a valid model.

The green square with a white checkmark shows that the column is a response variable for a current model.

To show unusual combinations of predictors, follow these steps:

  1. Choose Data > Conditional Formatting > Statistical > Unusual X.
  2. In Response, enter Flow. Click OK.

The text and background color for the response value in row 7 change so that you can see that it’s unusual to have a depth of 0.76 in the first stream.

The value of the response in the row with the unusual X value has red shading and red letters.

You can indicate values that aren’t fit well by the model in a similar fashion.

  1. Choose Data > Conditional Formatting > Statistical > Large Residual.
  2. In Response, enter Flow.
  3. In Style, select Yellow. Click OK.

Now, in the worksheet, the unusual combinations of predictors are red and the values that aren’t fit well by the model are yellow:

The unusual cell with the response value for the row with an unusual X value has a red theme. The cell with the response value for a row that the model does not fit well has a yellow theme.
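Behind the scenes, these two flags correspond to standard regression diagnostics: "Unusual X" rows are high-leverage points, and "Large residual" rows have standardized residuals beyond a cutoff (commonly 2 in absolute value). If you wanted to reproduce similar flags outside Minitab, a rough sketch with Python's statsmodels might look like the following; the data frame is invented and only mimics the stream worksheet's Flow, Depth, and Location columns.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented stand-in for the stream worksheet (Flow, Depth, Location)
df = pd.DataFrame({
    "Flow":     [1.2, 1.5, 1.1, 2.0, 1.8, 2.3, 0.4, 1.6, 1.4, 2.1],
    "Depth":    [0.34, 0.40, 0.31, 0.55, 0.50, 0.62, 0.76, 0.45, 0.38, 0.58],
    "Location": ["A", "A", "A", "A", "B", "B", "A", "B", "B", "A"],
})

model = smf.ols("Flow ~ Depth + C(Location)", data=df).fit()
influence = model.get_influence()

# "Unusual X" corresponds to high leverage; "large residual" to big standardized residuals
df["leverage"] = influence.hat_matrix_diag
df["std_resid"] = influence.resid_studentized_internal

p = int(model.df_model) + 1                        # model terms, including the intercept
n = len(df)
df["unusual_X"] = df["leverage"] > 3 * p / n       # a common leverage rule of thumb
df["large_resid"] = df["std_resid"].abs() > 2      # a common standardized-residual cutoff
print(df)
```

With a toy data set this small, the leverage cutoff may not flag anything; the point is simply to show which quantities drive the conditional formatting.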

Not all of the ways that Minitab can conditionally format depend on the model. If you’re ready for more, take a look at the online support center to see examples of other uses of conditional formatting.

Can Regression and Statistical Software Help You Find a Great Deal on a Used Car?


You need to consider many factors when you’re buying a used car. Once you narrow your choice down to a particular car model, you can get a wealth of information about individual cars on the market through the Internet. How do you navigate through it all to find the best deal?  By analyzing the data you have available.  

Let's look at how this works using the Assistant in Minitab 17. With the Assistant, you can use regression analysis to calculate the expected price of a vehicle based on variables such as year, mileage, whether or not the technology package is included, and whether or not a free Carfax report is included.

And it's probably a lot easier than you think. 

A search of a leading Internet auto sales site yielded data about 988 vehicles of a specific make and model. After putting the data into Minitab, we choose Assistant > Regression…

At this point, if you aren’t very comfortable with regression, the Assistant makes it easy to select the right option for your analysis.

A Decision Tree for Selecting the Right Analysis

We want to explore the relationships between the price of the vehicle and four factors, or X variables. Since we have more than one X variable, and since we're not looking to optimize a response, we want to choose Multiple Regression.

This data set includes five columns: mileage, the age of the car in years, whether or not it has a technology package, whether or not it includes a free CARFAX report, and, finally, the price of the car.

We don’t know which of these factors may have significant relationship to the cost of the vehicle, and we don’t know whether there are significant two-way interactions between them, or if there are quadratic (nonlinear) terms we should include—but we don’t need to. Just fill out the dialog box as shown. 

Press OK and the Assistant assesses each potential model and selects the best-fitting one. It also provides a comprehensive set of reports, including a Model Building Report that details how the final model was selected and a Report Card that alerts you to potential problems with the analysis, if there are any.

Interpreting Regression Results in Plain Language

The Summary Report tells us in plain language that there is a significant relationship between the Y and X variables in this analysis, and that the factors in the final model explain 91 percent of the observed variation in price. It confirms that all of the variables we looked at are significant, and that there are significant interactions between them. 

The Model Equations Report contains the final regression models, which can be used to predict the price of a used vehicle. The Assistant provides 2 equations, one for vehicles that include a free CARFAX report, and one for vehicles that do not.

We can tell several interesting things about the price of this vehicle model by reading the equations. First, the average cost for vehicles with a free CARFAX report is about $200 more than the average for vehicles with a paid report ($30,546 vs. $30,354).  This could be because these cars probably have a clean report (if not, the sellers probably wouldn’t provide it for free).

Second, each additional mile added to the car decreases its expected price by roughly 8 cents, while each year added to the car's age decreases the expected price by $2,357.

The technology package adds, on average, $1,105 to the price of vehicles that have a free CARFAX report, but the package adds $2,774 to vehicles with a paid CARFAX report. Perhaps the sellers of these vehicles hope to use the appeal of the technology package to compensate for some other influence on the asking price. 

Residuals versus Fitted Values

While these findings are interesting, our goal is to find the car that offers the best value. In other words, we want to find the car that has the largest difference between the asking price and the expected asking price predicted by the regression analysis.

For that, we can look at the Assistant’s Diagnostic Report. The report presents a chart of Residuals vs. Fitted Values.  If we see obvious patterns in this chart, it can indicate problems with the analysis. In that respect, this chart of Residuals vs. Fitted Values looks fine, but now we’re going to use the chart to identify the best value on the market.

In this analysis, the “Fitted Values” are the prices predicted by the regression model. “Residuals” are what you get when you subtract the predicted asking price from the actual asking price—exactly the information you’re looking for! The Assistant marks large residuals in red, making them very easy to find. And three of those points—which appear in light blue above because we’ve selected them—have asking prices very far below what the regression analysis predicts.

Selecting these data points on the graph reveals that these are vehicles whose data appears in rows 357, 359, and 934 of the data sheet. Now we can revisit those vehicles online to see if one of them is the right vehicle to purchase, or if there’s something undesirable that explains the low asking price. 
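If you prefer scripting to point-and-click, the same bargain hunt can be reproduced in a few lines of code. This is only a sketch: the file name and column names are invented, and the simple model below omits the interaction and quadratic terms the Assistant actually screened.

```python
import pandas as pd
import statsmodels.formula.api as smf

# cars.csv is a hypothetical export of the scraped listings:
# Price, Mileage, Age, TechPkg, FreeCarfax
cars = pd.read_csv("cars.csv")

# Simplified model; the Assistant also evaluated interaction and quadratic terms
model = smf.ols("Price ~ Mileage + Age + C(TechPkg) + C(FreeCarfax)", data=cars).fit()

# Residual = actual price - predicted price; big negative values are potential bargains
cars["residual"] = cars["Price"] - model.fittedvalues
print(cars.sort_values("residual").head(3))   # the three asking prices furthest below prediction
```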

Sure enough, the records for those vehicles reveal that two of them have severe collision damage.

But the remaining vehicle appears to be in pristine condition, and is several thousand dollars less than the price you’d expect to pay, based on this analysis!

With the power of regression analysis and the Assistant, we’ve found a great used car—at a price you know is a real bargain.

 

One-Sample t-test: Calculating the t-statistic is not really a bear


While some posts in our Minitab blog focus on understanding t-tests and t-distributions, this post will focus more simply on how to hand-calculate the t-value for a one-sample t-test (and how to replicate the p-value that Minitab gives us). 

The formulas used in this post are available within Minitab Statistical Software by choosing the following menu path: Help> Methods and Formulas> Basic Statistics> 1-sample t.

The null and three alternative hypotheses for a one-sample t-test are shown below. The null hypothesis is that the population mean equals the hypothesized value (H0: µ = µ0); the alternative is that the population mean is less than, greater than, or not equal to that value.

The default alternative hypothesis is the last one listed: the true population mean is not equal to the hypothesized value, and this is the option used in this example.

To understand the calculations, we’ll use a sample data set available within Minitab.  The name of the dataset is Bears.MTW, because the calculation is not a huge bear to wrestle (plus who can resist a dataset with that name?).  The path to access the sample data from within Minitab depends on the version of the software. 

For the current version of Minitab, Minitab 17.3.1, the sample data is available by choosing Help> Sample Data.

For previous versions of Minitab, the data set is available by choosing File> Open Worksheet and clicking the Look in Minitab Sample Data folder button at the bottom of the window.

For this example, we will use column C2, titled Age, in the Bears.MTW data set, and we will test the hypothesis that the average age of bears is 40. First, we’ll use Stat> Basic Statistics> 1-sample t to test the hypothesis:

After clicking OK above we see the following results in the session window:

With a high p-value of 0.361, we don’t have enough evidence to conclude that the average age of bears is significantly different from 40. 

Now we’ll see how to calculate the T value above by hand.

The T value (0.92) shown above is calculated in Minitab using the formula t = (x̄ − µ0) / (s / √n), where x̄ is the sample mean, µ0 is the hypothesized value, s is the sample standard deviation, and n is the sample size.

The output from the 1-sample t test above gives us all the information we need to plug the values into our formula:

Sample mean: 43.43

Sample standard deviation: 34.02

Sample size: 83

We also know that our target or hypothesized value for the mean is 40.

Using the numbers above to calculate the t-statistic we see:

t = (43.43 − 40) / (34.02 / √83) = 0.918542
(which rounds to 0.92, as shown in Minitab’s 1-sample t-test output)
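If you'd rather check the arithmetic with a few lines of code, here is a minimal sketch in Python using scipy. It plugs in the summary statistics above; it is an independent cross-check, not part of Minitab.

```python
import math
from scipy import stats

x_bar, s, n, mu0 = 43.43, 34.02, 83, 40

t_value = (x_bar - mu0) / (s / math.sqrt(n))
p_value = 2 * stats.t.sf(abs(t_value), df=n - 1)   # two-tailed p-value

print(f"t = {t_value:.2f}, p = {p_value:.3f}")     # roughly t = 0.92, p = 0.361
```

With the raw Age column in hand, scipy.stats.ttest_1samp(age, 40) would return the same t and p-value directly.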

Now, we could dust off a statistics textbook and use it to compare our calculated t of 0.918542 to the corresponding critical value in a t-table, but that seems like a pretty big bear to wrestle when we can easily get the p-value from Minitab instead.  To do that, I’ve used Graph> Probability Distribution Plot> View Probability:

In the dialog above, we’re using the t distribution with 82 degrees of freedom (we had an N = 83, so the degrees of freedom for a 1-sample t-test is N-1).  Next, I’ve selected the Shaded Area tab:

In the dialog box above, we’re defining the shaded area by the X value (the calculated t-statistic), and I’ve typed in the t-value we calculated in the X value field. This was a 2-tailed test, so I’ve selected Both Tails in the dialog above.

After clicking OK in the window above, we see:

We add together the probabilities from both tails, 0.1805 + 0.1805 and that equals 0.361 – the same p-value that Minitab gave us for the 1-sample t test. 

That wasn’t so bad—not a difficult bear to wrestle at all!

Model Fit: Don't be Blinded by Numerical Fundamentalism


Statistics is all about modelling. But that doesn’t mean strutting down the catwalk with a pouty expression. 

It means we’re often looking for a mathematical form that best describes relationships between variables in a population, which we can then use to estimate or predict data values, based on known probability distributions.

To aid in the search and selection of a “top model,” we often utilize calculated indices for model fit.

In a time series trend analysis, for example, mean absolute percentage error (MAPE) is used to compare the fit of different time series models. Smaller values of MAPE indicate a better fit.

You can see that in the following two trend analysis plots:

Trend analysis plot for Model A (low MAPE)

Trend analysis plot for Model B (high MAPE)

The MAPE value is much lower in top plot for Model A (9.37) than it is for the bottom plot with Model B (24.84). So Model A fits its data better than Model B fits its dat—ah…er, wait…that doesn’t seem right.. I mean… Model B looks like a closer fit, doesn’t it…hmmm…do I have it backwards…what the...???

Step back from the numbers!

Statistical indices for model fit can be great tools, but they work best when interpreted using a broad, flexible attitude, rather than a narrow, dogmatic approach. Here are a few tips to make sure you're getting the big picture: 

  • Look at your data

No, don't just look. Gaze lovingly. Stare rudely. Peer penetratingly. Because it's too easy to get carried away by calculated stats. If you graphically examine your data carefully, you can make sure that what you see, on the graph, is what you get, with the statistics. Looking at the data for these two trend models, you know the MAPE value isn’t telling the whole story.

  • Understand the metric

MAPE measures the absolute percentage error in the model. To do that, it divides the absolute error of the model by the actual data values. Why is that important to know? If there are data values close to 0, dividing by those very small fractional values greatly inflates the value of MAPE.

That’s what’s going on in Model B. To see this, look what happens when you add 200 to each value in the data set for Model B—and fit the same model. 

Trend analysis plot for Model B with 200 added to each value (lowest MAPE)

Same trend, same fit, but now the absolute percentage of error is more than 25 times lower (0.94611) than it was with the data that included values close to 0—and more than 10 times lower than the MAPE value in Model A. That result makes more sense, and is consistent with the model fit shown on the graph.
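You can verify this arithmetic with a toy example (made-up numbers, not the data behind the plots): the same absolute errors give a wildly different MAPE depending on whether the actual values sit near zero.

```python
import numpy as np

def mape(actual, fitted):
    """Mean absolute percentage error: average of |error| / |actual|, times 100."""
    actual, fitted = np.asarray(actual, float), np.asarray(fitted, float)
    return np.mean(np.abs((actual - fitted) / actual)) * 100

actual = np.array([0.5, 1.0, 2.0, 3.5, 5.0])
fitted = actual + 0.4                         # a constant error of 0.4 at every point

print(mape(actual, fitted))                   # inflated, because some actuals are near 0
print(mape(actual + 200, fitted + 200))       # same errors, same fit, tiny MAPE
```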

  • Examine multiple measures

MAPE is often considered the go-to measurement for the fit of time series models. But notice that there are two other measures of model error in the trend plots: MAD (mean absolute deviation) and MSD (mean squared deviation). In both trend plots for Model B, those values are low and identical. They’re not affected by values close to 0. 

Examining multiple measures helps ensure you won't be hoodwinked by a quirk of a single measure.

  • Interpret within the context

Generally you’re safest using measures of fit to compare the fits of candidate models for a single data set. Comparing model fits across different data sets, in different contexts, leads to invalid comparisons. That’s why you should be wary of blanket generalizations (and you’ll hear them), such as “every regression model should have an R-squared of at least 70%.” It really depends on what you’re modelling, and what you’re using the model for. For more on that, read this post by Jim Frost on R-squared.

Finally, a good model is more than just a perfect fit

Don't let small numerical differences in model fit be your be-all and end-all. There are other important practical considerations, as shown by these models.

Plots comparing a simple model and a complex model

All About Run Charts


I blogged a few months back about three different Minitab tools you can use to examine your data over time. Did you know that you can also use a simple run chart to display how your process data changes over time? Of course those “changes” could be evidence of special-cause variation, which a run chart can help you see.

What’s special-cause variation, and how’s it different from common-cause variation?

You know that variation occurs in all processes, and common-cause is just that—a natural part of any process. Special-cause variation comes from outside the system and causes recognizable patterns, shifts, or trends in the data. A run chart shows graphically whether special causes are affecting your process.

A process is in control when special causes of variation have been eliminated.

How can I create a run chart in Minitab?

It’s easy! Follow along with this example:

Suppose you want to be sure the widgets your company makes are within the correct weight specifications requested by your customer. You’ve collected a data set that contains weight measurements from the injection molding process used to create the widgets (Open the worksheet WEIGHT.MTW that’s included with Minitab’s sample data sets—in Minitab 17.2, open Help > Sample Data).

To evaluate the variation in weight measurements, you create a run chart in Minitab:

  1. Choose Stat > Quality Tools > Run Chart
  2. In Single column, enter Weight
  3. In Subgroup size, enter 1. Click OK.

Here’s what Minitab creates for you:

Minitab Run Chart

Note that Minitab plots the values of the data points in the order they were collected and draws a horizontal reference line at the median.

What does my run chart tell me about my data?

You can examine the run chart to see if there are any obvious patterns, but Minitab includes two tests for randomness that provide information about non-random variation due to trends, oscillation, mixtures, and clustering in your data. Such patterns indicate that the variation observed is due to special-cause variation.

In the example above, because the approximate p-values for clustering, mixtures, trends, and oscillation are all greater than the significance level of 0.05, there’s no indication of special-cause variation or non-randomness. The data appear to be randomly distributed with no temporal patterns, but to be certain, you should examine the tests for runs about the median and runs up or down. However, it looks as if the variation in widget weights will be acceptable to your customer.
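If you're curious about the math behind those approximate p-values, the clustering and mixture tests are based on the number of runs about the median (the trend and oscillation tests use runs up and down). Here is a rough, two-sided sketch of a runs-about-the-median test in Python using the usual normal approximation; it reads a hypothetical weight.csv export of the Weight column, so treat it as an illustration rather than Minitab's exact calculation.

```python
import numpy as np
from scipy import stats

def runs_about_median(x):
    """Runs test about the median, using the Wald-Wolfowitz normal approximation."""
    x = np.asarray(x, float)
    median = np.median(x)
    above = (x > median)[x != median]          # drop points exactly on the median
    n1, n2 = above.sum(), (~above).sum()
    runs = 1 + np.sum(above[1:] != above[:-1]) # a run ends at every change of side
    expected = 2 * n1 * n2 / (n1 + n2) + 1
    variance = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - expected) / np.sqrt(variance)
    return runs, 2 * stats.norm.sf(abs(z))     # two-sided approximate p-value

weights = np.loadtxt("weight.csv")             # hypothetical single-column export of Weight
print(runs_about_median(weights))
```

Too few runs points toward clustering, too many toward mixtures; Minitab reports a one-sided tail for each, so its p-values will differ slightly from this two-sided version.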

Tell me more about these nonrandom patterns that can be identified by a run chart …

There are four basic patterns of nonrandomness that a run chart will detect—mixture, cluster, oscillating, and trend patterns.

A mixture is characterized by an absence of points near the center line:

http://support.minitab.com/en-us/minitab/17/runchart_mixture.png

Clusters are groups of points in one area of the chart:

http://support.minitab.com/en-us/minitab/17/runchart_cluster.png

Oscillation occurs when the data fluctuates up and down:

http://support.minitab.com/en-us/minitab/17/runchart_oscillation.png

A trend is a sustained drift in the data, either up or down:

http://support.minitab.com/en-us/minitab/17/runchart_trend.png

To learn more about what these patterns can tell you about your data, visit run chart basics on Minitab 17 Support. 

Have You Accidentally Done Statistics?


Have you ever accidentally done statistics? Not all of us can (or would want to) be “stat nerds,” but the word “statistics” shouldn’t be scary. In fact, we all analyze things that happen to us every day. Sometimes we don’t realize that we are compiling data and analyzing it, but that’s exactly what we are doing. Yes, there are advanced statistical concepts that can be difficult to understand—but there are many concepts that we use every day that we don’t realize are statistics.

I consider myself a student of baseball, so my example of unknowingly performing statistical procedures concerns my own experiences playing that game.

My baseball career ended as a 5’7” college freshman walk-on. When I realized that my ceiling as a catcher was a lot lower than that of my 6’0”-6’5” teammates, I hung up my spikes. As an adult, while finishing my degree in Business Statistics, I had the opportunity to shadow a couple of scouts from the Major League Baseball Scouting Bureau. Yes, I’ve seen Moneyball and I know that traditional scouting methods are reputed to conflict with the methods of stat nerds like myself, but as a former player I wanted to see what these scouts were looking at. 

My first day with the scouts, I found out they were traditional baseball guys. They didn’t believe data could tell them how good a player is any better than observation could, and ultimately they didn't think statistics were important to what they do.  

I found their thinking to be a little off, and a little funny. Although they didn’t believe in statistics, the tools they use for their jobs actually quantify a player's attributes. I watched as they used a radar gun to measure pitch speed, a stopwatch to measure running speed, and a notepad to record their measurements (they didn’t realize they were compiling data). As one of the scouts was conversing with me, asking how statistics are going to be brought into baseball, he was making a dot plot by hand of the pitcher's pitches by speed to find the velocity distribution of the pitcher.

After I explained to him that he was unknowingly creating a dot plot (like the one I created for Raisel Iglesias using Minitab, and which has a bimodal distribution), we started talking about grading players’ skills. The scouts would grade how players hit, their power, how they run, arm strength, and fielding ability. They used a numeric grading system from 20-80 for each of the characteristics, with 20 being the lowest, 50 being average, and 80 being elite. After they compiled this data they would give the players grades through analysis, and they would create a report with these grades to convey to others what they saw in the player.

I was amazed at how these scouts—true, old-school baseball guys who said stats weren’t important for their jobs—were compiling data and analyzing it for their reports. 

A few of the other statistical ideas the scouts were (accidentally) concerned about included the sample size of observations of a player, comparison analysis, and predicting where a player falls within their physical development (regression).

Like the baseball scouts, many of us are unwittingly doing statistics. Just like these scouts, we run into data all day long without recognizing that we can compile and analyze it. In work we worry about customer satisfaction, wait time, average transaction value, cost ratios, efficiency, etc. And while many people get intimidated when we use the word "statistics," we don’t need advanced degrees to embrace observing, compiling data, and making solid decisions based on our analysis.

So, are you accidentally doing statistics? If you want to get beyond accidentally doing statistics and analyze a little more deliberately, Minitab has many tools, like the Assistant menu and StatGuide, to help you on your stats journey.

Analyzing the Jaywalking Habits of New England Wildlife


My recent beach vacation began with the kind of unfortunate incident that we all dread: killing a distant relative.  

It was about 3 a.m. Me, my two sons, and our dog had been on the road since about 7 p.m. the previous day to get to our beach house on Plum Island, Massachusetts. Google maps said our exit was coming up and that we were only about 15 minutes away from our palace. Buoyed by that projection, I sat a little taller in my seat.

Is that the salty sea air filling my nostrils? I thought to myself. Is that a refreshing ocean breeze cooling the air?

And then:

Is that a f—thumpity bump bump bump—ox that just disappeared under my car?!

"I think that was a fox, dad." My son answered my question before I could ask it.

"That's what I thought, too. Darn. Kind of ironic. And not in a good way."

"Yeah, way to go, dad," my other son added. 

Everyone's a critic.

The irony is that my last name is Fox. And I've always kind of identified with the handsome, intelligent, and resourceful creatures. I couldn't feel too bad though; there was nothing I could have done about it. The poor critter had been sprinting across the highway. No sooner had its small frame popped into the glow of my headlamps than it had disappeared into the empty void under our feet. Oh well, at least for him it was over quickly. And at least we were almost to the beach. 

Before I could ponder this potential omen too long, we came to our exit. There, in the middle of the exit ramp, were the 2-dimensional remains of what looked to have been, in life, another fox. Apparently, we were traveling through an area of dense foxes. By which I mean that there was a high density of foxes in the area, not that the foxes in the area were highly dense. Although, truth be told, I was feeling a little dense myself at the time. Did I mention that it was now 3 a.m.? 

We continued onto a back-country road that Google maps promised would lead us to our beach house. Is that a marsh off the left just up ahead? I thought to myself. Are those sea grasses waving in the gentle breeze?

Is that the oil light glowing on my console?

"Oh crap."

Stepping out of the car, I noticed the smells of sea air and motor oil mingling with the scent of the forest. I had hoped that the warning light was just an electrical glitch. However, a casual inspection confirmed that the oil that should have been inside the engine had been working its way outside of the engine, where it is considerably less effective. I was reminded of the words of a noted transportation engineer, "If I push 'er any further cap'n, the engine's gonna blow!"

This sentiment was echoed by the tow truck driver as well. As he descended from his cab to assess the scene, he exclaimed "You left quite a trail. Can't be much oil left in that engine." I told him what had happened. He scratched his chin and asked, "Did you say a fox? That's funny because I towed a customer last week who hit a fox in a rental car. Busted the oil filter. What do you know?" 

As we stood by the side of the road waiting for our taxi, dawn's first light broke slowly over the marsh, the birds began singing to greet the new day, and the mosquitoes worked persistently to move sizable quantities of blood from inside of our bodies to outside of our bodies. Where it is considerably less effective. Even so, it was kind of a nice moment. Moved by a surprising sense of peace, I turned to my sons.

"I think I know what this all means. I think that perhaps my spirit animal appeared in physical form to test me. To remind me that—to a large extent—happiness is a choice. And if I allow circumstance to rob me of my happiness, that, too, is a choice."

"Spirit animal, huh?" As he spoke I could actually hear my son's eyes rolling back in his head.

My other son chimed in, "If he wasn't a spirit before, he is now."

Everyone's a critic. 

The rest of our vacation went swimmingly. (Pun intended.) In the end, the momentary hassle and added expense of the incident didn't detract at all from our enjoyment of the trip. However, I was curious about the confluence of jaywalking wildlife, so I started doing a little research and learned that some states are actively collecting data on such accidents. I found that Massachusetts has a web page where you can report animal collisions, so I contributed my data for the cause.

I also found out that California and Maine actually enlist and train "citizen scientists" to peruse roadways in a coordinated effort to determine where animals are most frequently hit, and what kinds of animals are hit in each location. This is important data, because animal crossings represent a significant hazard to motorists and wildlife alike. Knowing what kinds of animals are frequently hit in different locations can help authorities focus efforts to introduce culverts, bridges, and other means of safe passage for critters so they can get where they need to go safely, without venturing onto the black top.

You can read the details of a three-year Maine study and explore an interactive map on the Maine Audubon web site. I thought it might be interesting to create a few graphs in Minitab Statistical Software to bring the roadkill data to life, so to speak. (Pun intended. Ill-advised, perhaps, but intended.) 

The first thing I noticed was that collisions with foxes are definitely not that unusual. The following bar chart shows the number of each species found during the data collection. 

Bar chart of counts by species

The web site also gives data for whether the animals found during data collection were alive or dead. As this stacked bar chart makes clear, animals with wings fare much better than earthbound critters when they encounter an automobile.

Stacked bar chart by animal group

The same trend is clear in this pie chart. The red slice in each pie shows the proportion of animals that survived the encounter. For birds, the red slice is much bigger than the blue slice.  

Pie chart of dead vs. live by group

Next time I encounter a spirit animal, or any animal on the road, I hope it has wings. 

Analyzing the History of Olympic Events with Time Series


OlympicsThe Olympic games are about to begin in Rio de Janeiro. Over the next 16 days, more than 11,000 athletes from 206 countries will be competing in 306 different events. That's the most events ever in any Olympic games. It's almost twice as many events as there were 50 years ago, and exactly three times as many as there were 100 years ago.

Since the number of Olympic events has changed over time, this makes it a great data set for a time series analysis.

A time series is a sequence of observations over regularly spaced intervals of time. The first step when analyzing time series data is to create a time series plot to look for trends and seasonality. A trend is a long-term tendency of a series to increase or decrease. Seasonality is the periodic fluctuation in the time series within a certain period—for example, sales for a store might increase every year in November and December. Here is a time series plot of the number of Olympic events since 1896.

Time Series Plot

There is clearly an upward trend, but no seasonal pattern. The data is also a little choppy at the beginning. Part of the explanation is that the data points are not evenly spaced. Most Olympic games are 4 years apart, but a few of them are just 2 years apart, and during World War I and World War II there were 8-year and 12-year gaps, respectively. Since time series data should be evenly spaced over time, we'll only look at data from 1948 on, when the Olympics started being held every 4 years without any interruptions.

Time Series Plot

Now that we have an evenly spaced series that clearly exhibits a trend, we can use a  trend analysis in Minitab Statistical Software to model the data. With a trend analysis, you can use four different types of models: linear, quadratic, exponential growth, and s-curve. We'll analyze our data using both the linear and s-curve models. An additional time series analysis you can use when your data exhibit a trend is double exponential smoothing, so we'll use that method too. 

Trend Analysis

Trend Analysis

Double Exponential Smoothing

You can use the accuracy measures (MAPE, MAD, MSD) to compare the fits of different time series models. For all three of these statistics, smaller values usually indicate a better-fitting model. If a single model does not have the lowest values for all three statistics, MAPE is usually the preferred measurement.
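For readers who want to see these measures computed, here is a rough sketch in Python that fits a double exponential smoothing (Holt) model with statsmodels and calculates MAPE, MAD, and MSD from the fitted values. The event counts below are illustrative stand-ins, not the exact series plotted above.

```python
import numpy as np
from statsmodels.tsa.holtwinters import Holt

# Illustrative stand-in for the number of Olympic events, one value per Games since 1948
events = np.array([136, 149, 151, 150, 163, 172, 195, 198, 203,
                   221, 237, 257, 271, 300, 301, 302, 302, 306], dtype=float)

fit = Holt(events).fit()                 # double exponential smoothing (level + trend)
errors = events - fit.fittedvalues

mape = np.mean(np.abs(errors / events)) * 100   # mean absolute percentage error
mad  = np.mean(np.abs(errors))                  # mean absolute deviation
msd  = np.mean(errors ** 2)                     # mean squared deviation
print(f"MAPE={mape:.2f}  MAD={mad:.2f}  MSD={msd:.2f}")

print(fit.forecast(2))                   # forecast the next two Games
```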

For the time series of Olympic event data, the s-curve model has the lowest values of MAPE and MAD, while the double exponential smoothing method has the lowest value for MSD. Based on the "MAPE breaks all ties" guideline, it appears that the s-curve model is the one we want to use.

However, accuracy measures shouldn't be the sole criteria you use to select a model. It's also important to examine the fit of the model, especially at the end of the series. And if the last 5 Olympics are any indication, it appears that the trend of adding large quantities of events to the Olympic Games is coming to an end. In the last 16 years, only 6 events have been added.

The double exponential smoothing model appears to have adjusted for this change, whereas the two trend analysis models have not. Given this additional consideration, the double exponential smoothing model is the one we should pick, especially if we want to use it to forecast future observations.

And now that we've settled on a model, we can sit back, relax, and watch all 918 medals be won. Let the games begin!

 

Correlation: What It Shows You (and What It Doesn't)


Often, when we start analyzing new data, one of the very first things we look at is whether certain pairs of variables are correlated. Correlation can tell you whether two variables have a linear relationship, and how strong that relationship is. This makes sense as a starting point, since we're usually looking for relationships, and correlation is an easy way to get a quick handle on the data set we're working with. 

We'll talk about the correlation between these two factors (shark attacks and ice cream sales) later in the post.
What Is Correlation?

How do we define correlation? We can think of it in terms of a simple question: when X increases, what does Y tend to do? In general, if Y tends to increase along with X, there's a positive relationship. If Y decreases as X increases, that's a negative relationship. 

Correlation is defined numerically by a correlation coefficient. This is a value that takes a range from -1 to 1. A coefficient of -1 is perfect negative linear correlation: a straight line trending downward. A +1 coefficient is, conversely, perfect positive linear correlation. A correlation of 0 is no linear correlation at all.

Making a scatterplot in Minitab can give you a quick visualization of the correlation between variables, and you can get the correlation coefficient by going to Stat > Basic Statistics > Correlation... Here are a few examples of data sets that a correlation coefficient can accurately assess. 

Scatterplot showing a positive linear correlation (r = 0.7)

This graph shows a positive correlation of 0.7. As you can see from the scatterplot, it's a fairly strong linear relationship: as the values of X tend to increase, Y tends to increase as well. Below is a similar plot, but here the relationship runs in the negative direction.

Scatterplot showing a negative linear correlation

Correlation's Limits

However, there are some drawbacks and limitations to simple linear correlation. A correlation coefficient can only tell whether your two variables have a linear relationship. Take, for example, the following chart, which has a correlation coefficient of about 0; we can pretty easily see that there isn't much of a relationship at all:

Scatterplot showing no relationship (correlation near 0)

However, now take a look at this graph, in which there is an obvious relationship, but not a linear one. Notice that the correlation coefficient is also 0 in this case:

Scatterplot showing a nonlinear relationship (correlation still near 0)

This is what you have to keep in mind when interpreting correlations. The correlation coefficient will only detect linear relationships. Just because the correlation coefficient is near 0, it doesn't mean that there isn't some type of relationship there. 
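A quick sketch (made-up data) shows the point numerically: a perfectly deterministic but nonlinear relationship can still have a Pearson correlation of essentially zero.

```python
import numpy as np

x = np.linspace(-3, 3, 101)

# A noisy linear relationship and a noiseless nonlinear one
linear = 2 * x + np.random.default_rng(1).normal(0, 1, x.size)
parabola = x ** 2                          # a strong relationship, but not a linear one

print(np.corrcoef(x, linear)[0, 1])        # strongly positive, close to +1
print(np.corrcoef(x, parabola)[0, 1])      # essentially 0, despite the obvious pattern
```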

The other thing to remember is something most of us hear soon after we begin exploring data—that correlation does not imply causation. Just because X and Y are correlated in some way does not mean that X causes a change in Y, or vice versa.

Here's my favorite example for this. If we look at two variables, shark attacks and ice cream sales, we know intuitively that there's no way one variable has a cause-and-effect impact on the other. However, both shark attacks and ice cream sales will have greater numbers in summer months, so they will be strongly correlated with each other. Be careful not to fall into this trap with your data!

Correlation has a lot of benefits, and it is still a good starting point in a number of different cases, but it's important to know its limitations as well. 
