# Fletching for science project



## Sanford (Jan 26, 2009)

I watched a young man at our range with a similar project for his school assignment. He recorded his results over several sessions, marking impact points on an NFAA blue face at 10 yards and repeating the test each night. I'm sure there was a lot of randomness in his results, but, by chance, the arrow with the least variance was the flu-flu.


----------



## Warbow (Apr 18, 2006)

Hook'em79 said:


> My son, a 14-year-old JOAD archer, is proposing a fletching experiment as a science project for school (eighth grade). His initial proposal is to fletch 6 of his Easton ACC arrows with different types of fletching (1- spinwings, 2- Kurlys, 3 – ambo, 4 – vanes, 5 – flonite, 6 – feathers) and record their target impact positions and scores over, say, 4 rounds of 36 arrows at 30 meters. He will attempt to keep the size, offset, and location as close as possible. Can anyone add some perspective, helpful comments, and additional ideas? Any references for similar tests in the recent past? Thank you in advance for your constructive comments.


So, I'd say your first step is to state what your hypothesis is so you can design a good experimental methodology to test that exact hypothesis and control for as many confounding variables as possible. He should probably run his experimental design past his teacher before he does the tests, but I suppose that depends on how the teacher has set up the assignment.

What are you testing? Science is about studying the natural world to understand it. The best science describes the world with predictive power. With enough data we can use the science of ballistics to calculate where a bullet will travel without even shooting a gun. So, does your son have a hypothesis he's testing, such as "Spinwings are more accurate than feathers" or "Kurlys have less drag than vanes"?

Science tests through "falsification". You come up with a hypothesis and then you run a test to see whether the results falsify it. It is useful to have a hypothesis before you run a test because the nature of the hypothesis will affect how you set up the test. If the test is about accuracy, what variables other than the type of fletching can affect accuracy? If you shoot a round of feathers and a round of Spinwings and the Spinwings score higher, were they more accurate? How do you objectively measure groups for accuracy if they are off center? How do you know the Spinwings are actually more accurate, rather than the particular feather fletchings having been glued on inconsistently--that is, how do you know you are actually testing the accuracy of Spinwings rather than your proficiency at fletching? Or whether your arrows are spined and grain-weighted inconsistently? Did you test all the shafts for grouping as bareshafts on a Hooter Shooter first? Do you test groups of arrows, or just one arrow against another? There's lots to think about before you do your experiment. What was the FPS of each arrow shot? How much do the bareshafts, fletchings, and complete arrows weigh? Recording scores isn't enough; you need to provide details about the experiment that will allow others to examine your underlying data.
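That off-center question can be made concrete by separating precision (how tightly the arrows group about their own centroid) from accuracy (how far they land from the target center). A minimal sketch, using made-up impact coordinates:

```python
import math

# Hypothetical impact coordinates (cm from target center) for one arrow type.
impacts = [(2.1, 3.0), (2.8, 2.2), (1.5, 3.4), (2.4, 2.9), (3.0, 2.5)]

n = len(impacts)
cx = sum(x for x, _ in impacts) / n   # group centroid, x
cy = sum(y for _, y in impacts) / n   # group centroid, y

# Precision: average distance of each impact from the group's own centroid.
precision = sum(math.hypot(x - cx, y - cy) for x, y in impacts) / n

# Accuracy: average distance of each impact from the target center (0, 0).
accuracy = sum(math.hypot(x, y) for x, y in impacts) / n

print(f"group centroid: ({cx:.2f}, {cy:.2f})")
print(f"precision (spread about centroid): {precision:.2f} cm")
print(f"accuracy  (offset from center):    {accuracy:.2f} cm")
```

If the two numbers diverge sharply, the group is tight but the sight setting is off--so a fletching comparison should probably score spread about the centroid rather than raw distance from the X.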


For good science you need to try to make the thing you are testing the only variable. So the science would be better if your son could find someone who'll let him use their Hooter Shooter (the archery equivalent of a Ransom Rest). Many archery shops have one and might be willing to help out on a science project. A shooting machine eliminates the human variable, so you know the difference is the fletching and not the person shooting--which makes it a lot easier to draw conclusions from the results. But the idea of shooting a number of full 36-arrow rounds to get more data points is a good one should a shooting machine not be available, though it means you'll need to test for coarser differences, because small variations won't necessarily be due to the arrows but could still be due to the archer. Be careful not to overstate the significance of results that are only slightly different, especially when human variability is involved.

I'd say that there is no need to test so many different variations for a science project--it is laudable and ambitious and a good project to do (and a way to learn the answers for JOAD), but also more than is needed for a good science project. There are already a lot of variables for him to try to control even before you add the experimental aspect of the types of fletching, so I think it would be good to scale down the testing. You could also, BTW, test bareshafts. While that might seem like adding to the things to be tested, it is actually a way of establishing the baseline accuracy of your shafts.

I hope we'll get to hear the results.


----------



## archerymom2 (Mar 28, 2008)

My son did a science fair project a few years ago. He used various weights to look at the relationship between momentum, speed, and weight. It was lots of fun! Let me know if you want more info -- he may still have the writeup.


----------



## Hook'em79 (Nov 20, 2009)

Thank you Warbow, that will give him some good points to consider and help narrow things down a bit!


----------



## Warbow (Apr 18, 2006)

Hook'em79 said:


> Thank you Warbow, that will give him some good points to consider and help narrow things down a bit!


Maybe too much. I'm a bit wordy at times :embara: As to the science project, I think keeping it simple will help, as will testing a prediction within the limits of what he can reliably measure.


----------



## Sanford (Jan 26, 2009)

You may have already seen this testing, but if not, there might be some helpful hints in it you could incorporate: http://archeryreport.com/2009/10/fletching-review-speed-drop/


----------



## Warbow (Apr 18, 2006)

Nice find Sanford.

I love it when people do science at home. 

(Nice that Lancaster donated supplies.)


----------



## Hook'em79 (Nov 20, 2009)

Thank you for the comments and insight!


----------



## Matt Z (Jul 22, 2003)

How are the arrows being launched? I would assume a compound with mechanical release would provide the most consistency (stating the obvious, I know).


----------



## Flehrad (Oct 27, 2009)

I would suggest having a look at this research paper. The link goes to the PDF file; you'll have to trust me that it's not something suspicious.
http://www.mediafire.com/?l8pcoonkshaa0ll


----------



## Warbow (Apr 18, 2006)

Flehrad said:


> I would suggest having a look at this research paper. The link goes to the PDF file; you'll have to trust me that it's not something suspicious.
> http://www.mediafire.com/?l8pcoonkshaa0ll


Nice find. Interesting paper, quite on point--also kind of awful, and a bad example of how to write a paper. Part of the problem is that it appears to be a truncated conference paper rather than a paper published in a peer-reviewed journal (peer review is a higher bar), so lots of critical details are missing from the paper, like what kind of material the strings were made of (since they compared "20 strings" (20-strand strings?) to "12 strings" (12-strand strings?)) and how many trials they ran--basic stuff.



> Proceedings of the Eighth International Conference on Machine Learning and Cybernetics, Baoding, 12-15 July 2009
> PERFORMANCE ANALYSIS BASED ON AN EMULATED ARCHERY
> MACHINE
> KUO-BIN LIN1, KUN-SHU HUANG2, CHI-KUANG HWANG2 , CHIA-WEN WU 3


First they wax on about how fabulous their archery "robot" is (a precision computer-controlled motor to draw the recurve bow back with a compound bow release), in spite of the fact that the robot is actually just a convenient way to draw the bow back to a precise distance--something that can be done with a crank and a set stop. They do this because they are submitting their paper to a conference on "Machine Learning and Cybernetics," so they have to have a robot tie-in even though the robot is somewhat superfluous to what they are testing.

Next, they make claims for effects they didn't even test for:



> For the spin-wing vanes, the outside curved vane
> design allows smooth air flow and reduce drag rotation, and
> the inside curved pockets trap and compress air for high
> spin-gyro stability. The curved vanes can allow less side
> ...


They didn't simulate a side wind in their tests, nor does their data include measurements of grouping with or without a side wind. And neither did they use a release that simulates fingers release, let alone a "bad" fingers release. So all of those claims are unsupported, yet stated as if they are proven by the experiments.

They also make claims that seem to be about the strand count in archery strings, but they don't give details on the strings, on the serving, on the mass of the string, on the material. Yet in their conclusion they state:



> For all four different vanes, we have observed that the
> decrease of the number of strings can improve the average
> arrow speed and the stability.


Really? Let's take a look:

A bareshaft of unstated mass launched with "20 strings" (strands?) flew an average 215.65 fps with a standard deviation of 0.1968. With "12 strings" it flew an average 215.93 fps with a standard deviation of 0.1155. They don't even say how many trials they performed--one hopes that, being fancy science guys, they calculated the number of trials needed for their claims to be statistically significant, yet, curiously, they don't mention any statistical analysis. Is 0.28 fps really statistically significant given the two ranges of deviation? Even if it is, is it practically significant? Does it make a difference in accuracy for a fingers shooter? Is anyone other than a flight shooter going to go to a 12-strand string to get a quarter of an fps? And how do they quantify "stability," exactly?
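Whether 0.28 fps clears the statistical bar depends entirely on the trial count the paper never reports. A quick sketch of Welch's t-statistic using the paper's reported means and standard deviations, with the number of trials n a pure assumption:

```python
import math

# Means and standard deviations (fps) as reported in the paper.
mean_20, sd_20 = 215.65, 0.1968   # "20 strings" bareshaft
mean_12, sd_12 = 215.93, 0.1155   # "12 strings" bareshaft

def welch_t(n):
    """Welch's t-statistic for two independent samples of (assumed) equal size n."""
    se = math.sqrt(sd_20**2 / n + sd_12**2 / n)
    return (mean_12 - mean_20) / se

# The paper never states how many shots were averaged, so n here is guesswork.
for n in (3, 5, 10, 30):
    print(f"assumed n = {n:2d}:  t = {welch_t(n):.2f}")
```

The larger t gets, the stronger the evidence for a real speed difference--but without the actual n, nobody can check the paper's claim, which is exactly the problem.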

Oh, and what, BTW, is the repeatable accuracy of the Easton chronograph? They list averages to hundredths of an fps and standard deviations to ten-thousandths, but should they? Many devices display digits beyond their repeatable accuracy. Also, they don't mention how they calibrated their chronograph. So even if the deltas are accurate, it is pretty likely that the absolute measurements are not accurate to a hundredth of an fps--not even with averaging.

While the paper is intriguing, in many ways it seems like a good example of how not to do a paper.


----------



## Flehrad (Oct 27, 2009)

But it provides a framework for the proposed experiment, as the OP wanted.


----------



## DK Lieu (Apr 6, 2011)

I pretty much agree with Warbow on this. The paper is not from an archival journal, but rather from a conference proceedings. Even so, I'm pretty surprised it was published at all, especially for that particular type of conference. Maybe they were hard up for papers, or something. Well, to be honest, I've had some pretty weak papers published myself. Anyway, the testing described in the paper was nothing that could not have been done with a good archer, a bow, and a release aid. About the only conclusion that can be reached is that some vanes have slightly more drag than others. There is no data or evidence that one type of vane creates better arrow grouping than another.

Speed is good, but an often overlooked, more elusive quality is "forgiveness". With recurve archery, the largest source of error is going to be the archer. In this sense, equipment that is less sensitive to errors made by the archer will be more desirable. The way to test for this is *not* by using a test machine that shoots the same way every time, such as the one used in the paper. It is done rather by inserting known, repeatable errors into the shot, and then seeing how the equipment reacts to those errors.

At the W&W research facility in Korea, I was shown a shooting machine with mechanical "fingers" that simulated the side force created when the string goes around real fingers. The machine in this form created repeatable shots. If I were to design an experiment to evaluate the forgiveness of certain types of vanes (or shafts, or limbs, ... ), I would build a similar set-up, but with a provision to add repeatable error to the shot. This can be done, for example, by adding a variety of small known masses to the mechanical fingers to increase their inertia (so they deflect the string more, or less), and then seeing what happens to arrow flight.


----------



## Stash (Jun 1, 2002)

The original idea is good, but only if his sample size is large enough. Have him look into basic statistics theory, and shoot enough arrows to make sure that his conclusions are statistically significant.

I used to know this stuff, but it's been nearly 40 years since I took the course...


----------



## Warbow (Apr 18, 2006)

Stash said:


> The original idea is good, but only if his sample size is large enough. Have him look into basic statistics theory, and shoot enough arrows to make sure that his conclusions are statistically significant.
> 
> I used to know this stuff, but it's been nearly 40 years since I took the course...


I think that is one area where the conference paper makes itself useful. It shows the difference between the Tight Flight vanes and spin wing vanes to be 2 fps at most (which could be due to mass, since they don't list the mass, and because they measured the speed a mere meter out, before there is much time for drag to have an effect). IIRC, elite recurve fingers shooters have a variability of 1-2+ fps even with a clicker. So the difference between Tight Flight and spin wings may be too small to measure using your son's recurve shot by hand. (Of course, that is a hypothesis, too: "Is the difference between Tight Flight and spin wings too small to measure given x conditions...") So you might want to test things that are likely to make more of a difference, something where the measurements will give results that are more than noise. That is what I mean by testing something your son is likely to be able to reliably measure.
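To put rough numbers on that signal-to-noise point, here is a back-of-the-envelope two-sample size estimate (normal approximation, 5% two-sided significance, 80% power). The 1.5 fps shooter SD is an assumption drawn from the 1-2+ fps variability figure above:

```python
import math

z_alpha = 1.96   # two-sided 5% significance level
z_beta = 0.84    # 80% power

def shots_per_group(delta_fps, sigma_fps):
    """Shots per fletching type needed to detect a mean speed
    difference delta_fps when shot-to-shot SD is sigma_fps."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma_fps ** 2 / delta_fps ** 2)

sigma = 1.5  # assumed shot-to-shot SD (fps) for a hand-shot recurve
for delta in (2.0, 1.0, 0.5):
    print(f"detect a {delta} fps difference: ~{shots_per_group(delta, sigma)} shots per fletching")
```

Halving the effect you're trying to detect quadruples the shots needed, which is why testing for coarser differences is the practical choice for a hand-shot bow.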


----------



## Hank D Thoreau (Dec 9, 2008)

Unless your son is a very good shooter and the impact of vane changes is significant, it will take a very large number of shots to produce a statistically valid outcome, even for a school science project. Simple experiments, where you can control the variables, are best for school science projects (see Warbow's comments). Think of all the factors that affect grouping. There are factors that will have a bigger influence than the arrow fletching.

Also, the experimenter should ABSOLUTELY NEVER be part of the apparatus. It is too easy to influence the results--it is almost impossible to separate yourself from what you are doing so that you do not start anticipating the answer and making it happen. This is especially true in school science projects but also occurs in primary research (I have reviewed many school science projects and have seen it over and over again). You would have to have someone shoot the arrows who is unaware that there is an experiment going on, and you would still be subject to all the other factors in play.

To be honest with you, there are too many issues with this experiment. I have seen many good science projects; most good projects are simple, well controlled, and make good use of the scientific method (no volcanoes, please).


----------



## Hook'em79 (Nov 20, 2009)

Great comments, it does seem like there are too many variables. Another idea he had involved charting differences in arrow speed with a chronograph for various fletchings using his Olympic recurve setup. That would be easier to track and record. I agree that reducing an experiment to be as simple as possible is preferable. Any comments? Would a $100 chronograph from LAS be sufficient for this task?


----------



## Warbow (Apr 18, 2006)

Hank D Thoreau said:


> There are factors that will have a bigger influence than the arrow fletching. Also, the experimenter should ABSOLUTELY NEVER be part of the apparatus. It is too easy to influence the results...it is almost impossible to separate yourself from what you are doing so that you do not start anticipating the answer and making it happen. This is especially true in school science projects but also occurs in primary research (I have reviewed many school science projects and have seen it over and over again). You would have to have someone shoot the arrows who is unaware that there is an experiment going on, and you would still be subject to all the other factors in play.


I think that is probably especially true of any experiment where the skill of the performer--execution--can make more difference than the variables. At high levels archery is such a mental game that I think it would be quite susceptible to unconscious bias on the part of the archer. If the archer thinks a certain arrow is going to be more accurate it is quite possible that their subconscious will make it so. Human biases are hard to control for without blinding (keeping what kind of fletchings are being tested secret from the archer even when they are shooting them) and I think it would be impossible to blind an archer let alone double blind the experiment (keep both the archer and every experimenter the archer deals with from knowing what arrows the archer is testing until after the experiment is concluded).

I can't say there are too many issues with the experiment because I haven't seen a full proposal yet, but it is certainly the challenge with testing archery--hence why archery companies use shooting machines.


----------



## Hank D Thoreau (Dec 9, 2008)

Hook'em79 said:


> Great comments, it does seem like there are too many variables. Another idea he had involved charting differences in arrow speed with a chronograph for various fletchings using his Olympic recurve setup. That would be easier to track and record. I agree that reducing an experiment to be as simple as possible is preferable. Any comments? Would a $100 chronograph from LAS be sufficient for this task?


I have the ProChrono and it is quite good. Now I have found that chrono readings are very easy to manipulate, though it may be harder with a compound or a clicker. I shoot barebow. My compound finger bow readings are usually quite consistent. With a barebow, it is easy to overdraw trying to get a bigger number. You are probably going to want to measure initial velocity since shooting through a downrange chrono can be expensive (even good archers break arrows trying to shoot through the metal pig target). I would think that measuring right out of the bow would limit the observed impact of the fletching on arrow speed.


----------



## caspian (Jan 13, 2009)

Hank D Thoreau said:


> I would think that measuring right out of the bow would limit the observed impact of the fletching on arrow speed.


agreed. I would actually expect that to be the point of minimum variation.


----------



## archeryal (Apr 16, 2005)

You could test the ability of the vanes to spin the arrow as a proxy for stabilizing it - with a movable target (or stepping back each time), identify how far the arrow travels to complete one full rotation by checking the position of the index vane compared to its initial position. If one vane can get the arrow spinning faster than another, it should help accuracy. Ideally, you'd want to compare this to its drag or arrow speed, especially outdoors, since arrow speed is not of much importance indoors.
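For what it's worth, once the distance per full rotation is found that way, the spin rate falls out of simple arithmetic. All the numbers below are made up for illustration:

```python
# If the index vane returns to its starting orientation after d metres of
# flight, and the arrow travels at v m/s, spin rate = v / d revolutions/sec.
v = 55.0   # arrow speed in m/s (assumed; measure with a chronograph)
d = 2.5    # metres of travel per full rotation (assumed; found by stepping back)

spin_rev_per_s = v / d
spin_rpm = spin_rev_per_s * 60

print(f"{spin_rev_per_s:.0f} rev/s ({spin_rpm:.0f} rpm)")
```

Comparing that figure across vane types gives a single number per fletching, which is much easier to record than full group positions.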


----------



## caspian (Jan 13, 2009)

mmm... spin helps stability, but there's definitely a limit where the shaft is adequately stabilised, and spinning faster past that gains you nothing and only costs you speed.

that said, high speed photography has shown that an arrow reaches the maximum rotational speed the vane offset will impart within quite a short distance (something in the order of 3 arrow lengths, from memory), and once the shaft is spinning, no additional effort is required to keep it so - inertia will do that. once the vanes are planing through the air at a rotational rate that matches speed and offset, it's very similar to a flywheel effect. think of a propeller underwater with a flywheel attached to it. most of the drag is in the initial phase as the flywheel stores energy in the form of inertia, but after that (ignoring the drag a flywheel would have on its bearings, which an arrow doesn't have) the only energy extracted from the spinning object is from surface drag across the vanes, which is the same regardless of the offset used.

the upshot is that while a relatively high vane offset will result in a slower downrange speed, that is mostly due to the slower uprange speed as velocity is transferred into spin during the launch/acceleration phase.


----------



## xcreek (Aug 31, 2007)

This has been a great thread for good, solid information. I would like to add one thing that IMO has been lost in this whole process. This project is to be completed by Hook'em's 14-year-old son. Part of the learning process in these types of projects is for the individual to take it from concept to execution, and many times the true education is in the testing phase, not the results: learning the variables and how they affect the outcome. Some suggestions, as stated in some of the above posts, would include a speed test, an energy test, and a simple but effective demonstration of how a compound system allows let-off of draw force. Just my take on the thread.

Mark Luman


----------



## Hook'em79 (Nov 20, 2009)

Jack thanks you all for your contributions, he's still engaged in a review of the literature but looks forward to beginning some testing soon! Again, many thanks!


----------

