Author Topic: KNOT TESTING GUIDELINES - is IGKT best positioned to set fundamental guidelines?

DerekSmith

  • IGKT Member
  • Sr. Member
  • *****
  • Posts: 1518
  • Knot Botherer
    • ALbion Alliance
Presumably, having noticed the repeated occurrence of jamming on the force generating machine side of the knot, you repeated the tests with the orientation reversed in order to establish if there was any bias induced by your test rig?

agent_smith

  • Sr. Member
  • *****
  • Posts: 993
Quote
Presumably, having noticed the repeated occurrence of jamming on the force generating machine side of the knot, you repeated the tests with the orientation reversed in order to establish if there was any bias induced by your test rig?

Thanks for your interest Derek.
I would presume that simply flipping the test rig along the 'x' axis would not alter the results?
I would do an x axis inversion and also invert the knot.
Will repeat to 12kN again.
Knot was tied with S/S chirality (S twist interlinked with S twist).
I could also try Z twist interlinked with Z twist if you think it's also relevant?

Xarax has commented that I should also test #1425A Riggers bend (without the X tail twist) as a control.
There is also another version of the Riggers X bend that Xarax has explored and suggests should be tested.
Link: http://igkt.net/sm/index.php?topic=4561.0

...

My focus will soon be shifting to #1410 Offset overhand bend.
We still don't have clear answers to the effect of rotation.
In the attached image I depict 3 different orientations (ABC).
I did a quick and dirty 'backyard test' today up to 2.5kN load...and was surprised to find that one of the rotations (A versus C) appeared to be quite stable in comparison to the other. I had my suspicions but was surprised by the initial result.
Was 'A' more stable or, was 'C' more stable?
Note 1: I consider 'B' to be the control.
Note 2: I stopped at 2.5kN because I knew from past experience that 3.0kN appears to be the jamming threshold (and I didn't want to risk damage from using tools to loosen and untie the structure).
« Last Edit: July 31, 2018, 04:42:52 PM by agent_smith »

DerekSmith

  • IGKT Member
  • Sr. Member
  • *****
  • Posts: 1518
  • Knot Botherer
    • ALbion Alliance
"Is IGKT best positioned to set fundamental guidelines?"

With the knot knowledge and expertise that is collected within the membership and Forum contributors, I believe the answer is without question  YES,  after all, if not the IGKT, then who?

BUT:  The IGKT title is pretentious, because the IGKT is not in fact a Guild.  It is not a Guild because it does not award levels of membership based on knowledge or achievement or services to Nodeology.  In truth, the IGKT is little more than a 'Mens Shed' for Knotters.

So, rephrasing the question slightly to 'Should the IGKT involve itself in setting fundamental guidelines?'  then, I believe my answer would need to be a resounding NO.  For this seeming contradiction, obviously some explanation is both due and indeed, necessary.

Let me start with a little background.  My career has been spent in Analysis, and for the major part I ran Analytical Laboratories covering the disciplines of Chemistry, Microbiology and Physical attributes.  During that time, I served on numerous Industrial advisory bodies and in a similar role for government, achieving both academic and professional accreditation.  When the need arose to produce Guidelines on Testing and Standards of Operation, I was in a position to Chair working parties drawn from industry, to produce those guidelines and to expect them to be accepted by the industry as a whole.  In drawing up those standards and guidelines, the Working Parties would often draw upon the knowledge and advice of experts in highly specialist areas, who, although vital to the process, could not be expected to be able to draw up standards and guidelines acceptable to the industry at large.

So, based on that background, I frame my first question - "Who / what is the target industry that these Guidelines are to apply to ?"

If the target audience is nothing more than IGKT members, then it would be appropriate to draw a working party from those expert members.  But if that is the case, then the goal is incredibly myopic.  Hopefully though, the goal is to produce guidelines of value to the whole (or at least a major part) of the knot using industry.  If that is the case, then, for the guidelines to be acceptable to that industry, the working party would need to include respected representatives from its various branches.  It is rational for the IGKT to champion such a project, perhaps even initiate its funding, even provide a rich supply of specialist experts - But, the working party would need to be established from respected members of the target industry.  This is a lot of work and means starting the project in a 'different place', but without it, the good intentions are likely to 'Light their way to dusty death' (thank you Will.).

This is turning into a lengthy post, but one more aspect is worth throwing into the pot for consideration - based on my experience as a professional analyst - the two 'Laws of Analysis'.

The first is that the result depends upon the sample, and the second is that we almost never test for what we wish to know...

Now, before you all start muttering "Stupid boy Pike (Derek)" for stating the obvious - that 'the result depends upon the sample' - you should by now, know me better - there is more to it than just the obvious.  Yes, the magnitude of the test result will be influenced by the value of the parameter in the test sample, but also, the nature of the sample will influence the choice of the test method, and test methods almost never measure exclusively the parameter you are interested in, and to compound the problem, they often measure some other parameter and infer the parameter of interest.  Of course, I don't expect you to believe me, so here are a couple of simple examples -

Moisture in cordage : If the sample is expected to be high moisture, the analyst might use a vacuum oven at 70 C.  But this method measures anything that is volatile in a vacuum at 70 C and does not measure any moisture that is tightly bound to the sample.
: If the sample is expected to be low moisture, the analyst might use Karl Fischer reagent.  But again, he is not analysing moisture, he is analysing anything that reacts with the iodine reagent, and hopes that this is mostly water.

So you see, yes the sample and the magnitude of its target attribute will directly influence the result, but it will also influence the choice of measurement technique which in turn will influence the inherent errors potentially present in the selected analytical technique.

This leads nicely to the second law - that we almost never test for what we want to know...  You have already seen this in the simple example of measuring moisture, but it gets worse.  We rarely stop to ask - what do we really want to find out?  Instead we ask - what can I measure?  then attempt to infer from those results a feel for the real issue that compels us.

So, my second question is - "What is the real thing we want to quantify?"

This leads us to HACCP, but I will cover that in a second post.

Derek

Dan_Lehman

  • Sr. Member
  • *****
  • Posts: 3768
per agent_smith:
Quote
But this looks like
...
I repeated the tests again today as follows:
...
[ ] at 12.0kN load: Jammed

I was unable to untie the test sample loaded to 12.0kN.

WHOA, I think you've mis-labeled "JAMMED"/<not> there:
for the SParts in what you're claiming are respectively (J/~J)
looking rather UNbound and more nearly bound,
and surely these are pretty good looks that you're
presenting in the nice pics.  The SPart you claim to have
been jammed sits with a decent bit of captured tail
keeping the collar adequately out from pinching it,
whereas one can see in the opposite side --which you
say was easily loosened-- that the collar had slid down
to more nearly --not entirely, but partly-- pinch the
SPart opposite another part of the knot.

Another note : there is a wee bit of daylight/space
visible in the side you claim jammed, between nipped
tail and alleged jammed SPart; and one can SEE the
full SPart from this space-point across the strand,
so there's no there there for it to be jammed against/into!
(And, frankly, although the situation is different on the
other end, even there it seems that there should be
ample uncovered-&-pinched-against SPart for it to
have been readily loosened; but I trust that you would
have done so had it been able.)


Thanks,
--dl*
====
« Last Edit: July 31, 2018, 10:31:40 PM by Dan_Lehman »

agent_smith

  • Sr. Member
  • *****
  • Posts: 993
Derek:
Thank you for your informed post.
That's the first really constructive and well considered reply.

I would like to reinforce a point that I believe is crucial:
I had advanced that all testers could be classified into one of the following groups:
1. Hobbyist/enthusiast testers
2. Pseudo lab testers
3. Certified, nationally accredited test labs.

I believe that expectations of quality scale according to which category a tester identifies as.
I identify as a 'backyard tester'.
I don't have anything sophisticated...with the one exception being a 'load cell'.
Obviously, a tester needs to be able to measure force - but anything suitable could be used (it doesn't have to be a multi-thousand dollar digital load cell). Fishing scales could suffice, although the load capacity of the measuring device would limit how much tension force you could safely generate. My end termination anchors are 2 trees growing in my yard!

My camera is a simple el cheapo compact digital type - my daughter's iPhone takes better quality photos!

So I am not sure if you had thought in terms of these 3 categories of testers?
Note: There are a few on this IGKT forum that dislike the term 'tester' - presumably because they fear drawing undue criticism if they hold themselves out as being a 'tester'.
And the term 'backyard' is a metaphor - for informal locations where 'testing' is conducted (it could literally be in a person's backyard, front yard, park land, a garage, a shed, or inside your living room!).

I would imagine that you wouldn't place the bar too high for a 'backyard' style tester?
But, I would imagine that you would have clear expectations of a certified, nationally accredited test lab?

I started this thread out of frustration at what appeared to be endless, mind-numbing tests that examined nothing but the MBS yield point of a knot (ie the pull-it-till-it-breaks default mentality). Tests often appear to be knot A versus knot B in a pull-to-failure contest - with the winner being declared superior. Also, inaccurate reporting on 'Bowlines' and 'Offset' bends (eg 'EDK') is prevalent and misinformation is parroted endlessly. So I felt compelled to do something...

Others have become bogged down in statistical mathematics, consistency and repeatability - to the point where it seemingly went beyond the capabilities of a 'backyard tester'. I would expect a great deal of scientific rigor from a certified, nationally accredited test lab - but would not hold a 'backyard' style tester to the same degree of rigor and expectation.

I personally think a tester should declare up front what tester category they identify as - and then expectations would scale accordingly.

EDIT NOTE:
I repeated a test by inverting the test rig and changing the chirality of the knot.
Same situation occurred: The collar adjacent to the force generating machine (ie a lever hoist) was most vulnerable to jamming.
After an initially jammed state was reached - the only way to loosen the structure was to use tools on the opposite collar. Once that collar was loose, it was then possible to work on the jammed collar - using considerable effort aided with tools - to finally work that collar loose. Time frames of around 15 minutes were required with tools to eventually succeed in loosening the structure.
« Last Edit: August 20, 2018, 01:30:47 AM by agent_smith »

agent_smith

  • Sr. Member
  • *****
  • Posts: 993
per Dan Lehman:
Quote
WHOA, I think you've mis-labeled "JAMMED"/<not> there:
for the SParts in what you're claiming are respectively

There is no 'WHOA' at all Dan.
Everything reported is factual and as I observed.
Jamming consistently occurred with the collar oriented to the same side as the force generating machine (a 2 ton lever hoist).
I am going to invert the 'test rig' and also invert the knot to see if jamming is still occurring on the collar facing the force generating machine (per Derek's recommendation).

I would respectfully request that you try this for yourself - load up a #1425A 'Riggers X bend' and see for yourself?

DerekSmith

  • IGKT Member
  • Sr. Member
  • *****
  • Posts: 1518
  • Knot Botherer
    • ALbion Alliance
Hi Mark,

Constructive discussion is a dance of sharing perspectives, preferably without alienating or offending fellow contributors when opinions clash.

When clashes occur, I find it sometimes valuable to attempt to explain why and how my perspective is different, yet hopefully remain sufficiently valid to be included and enrich the discussion.

With that in mind, I would like to attempt to paraphrase your Group definitions :-

Group 1 :  Ingenious, motivated amateur(s) with virtually nothing but household/garage equipment, some types of cordage and a knowledge of knots. [Remember Ashley developed and documented two highly reputable test systems using nothing more than timber, hinges and a bag of sand] .

Group 2 :  Group 1 plus access to some sophisticated but not formally calibrated force measuring equipment and recording systems.

Group 3 :  Accredited testing facility with certified calibrated Stress / Strain measuring and recording systems, operated by Technicians following detailed and rigorous methodologies.

I hope that you can agree that these expanded definitions remain in accord with the classification you have advanced.

Now let us consider a simple investigation and frame it with the two questions I posed in my earlier post:-

Q1 - who is the target audience?  for this case it will be simply me, intending to climb a tree.
Q2 - what do I want to find out?  I have an 11mm kernmantle climbing rope for security and assist, and I want to know which of the following three knots would be best for my tie-in point and why?  a) Carrick Loop,  b) Bowline, or c) Whatknot Loop.

Now, had my goal been to write a paper for my Masters, and all things being equal, I would have chosen Group 3 because it would have given me opportunity to detail levels of Accreditation and Calibration reports, along with copious amounts of data that I could have thrown through numerous respected statistical engines in order to be able to claim a statistically justifiable level of confidence in my findings.  It might even have been enough to win me a good grade.

But, if my goal was as I stated - to choose the best knot as a tie-in, then Group 1 would be my best choice by a country mile.  They would be able to identify the parameters that mattered - ease and accuracy of tying, risk of mis-tying, proneness to jam, ease of release, response to snag loading and deformation under load, resistance to dressing structure deformation under flogging, ...

One of the key tests here is going to be 'accuracy of tying' - the exact opposite of what has been proposed so far - i.e. the pedantic prescription of what the knot must be in order to be tested.  The tester in Group 1 would ask several people to tie each knot in order to determine a) the likelihood of tying a working (safe) knot and b) the likelihood of a mistied knot being identified before use; all this and not a digital meter in sight...

Result :- Carrick Loop with advisory to check the pattern of the Carrick mat before dressing the knot  - the other two, advisory not to be used based on increased risk of death.

Amateurs are often the pinnacle of expertise within their chosen field (amateur radio fans have led the world in aerial design) and should not be dismissed through lack of modern test equipment. [Remember, ingenuity can help you calculate the circumference of the Earth by standing at the bottom of a deep well].

I hope I have been able to explain why I stand your list of competence of groups on its head.  I put output from Group 1 leagues ahead of mountains of calibrated data from Group 3.  It is a sickness of today's mindset that we must have calibrated data in order to make a qualified decision, and that a mountain of statistical analysis is a substitute for intelligence and understanding.

Derek



agent_smith

  • Sr. Member
  • *****
  • Posts: 993
Thanks Derek,
I concur with your comments and thought processes.

Quote
Group 1 :  Ingenious, motivated amateur(s) with virtually nothing but household/garage equipment, some types of cordage and a knowledge of knots.

So, what 'title' would you suggest for this group?
Just to be clear (and full disclosure), I had never intended the term 'backyard tester' to be derogatory or insulting. It's simply a metaphor. I (for example) identify as a 'backyard tester'.

Quote
I hope I have been able to explain why I stand your list of competence of groups on its head.  I put output from Group 1 leagues ahead of mountains of calibrated data from Group 3.
Perfectly understandable...and I would chime in that I had no intended hierarchical order in mind... that is, by listing them 1,2,3 - this did not mean that 'group 3' were by default superior to 'group 1' or 'group 2'. It was simply a way of classifying the different entities.

Quote
It is a sickness of today's mindset that we must have calibrated data in order to make a qualified decision, and that a mountain of statistical analysis is a substitute for intelligence and understanding.

Perhaps - though I have found in technical discussions with various authors of knot reports around the world, that they request references and citations to back up claims. Citing a 'backyard' test report doesn't hold as much weight as a report from a higher level source/authority. That is not to say that a backyard tester can't make a valuable contribution or produce worthy reports.

A question that you did not answer is the level of scientific rigor and expectation from each of the 3 classes of tester. I had advanced that expectations of scientific rigor scale according to the class of tester.
NautiKnots argued for scientific rigor - underpinned by consistency, repeatability and statistically valid sampling of data. NautiKnots also argued that external agencies such as 'The Cordage Institute' and the 'IEEE' could and should be consulted when devising knot test plans because of their expertise. I am unclear whether NautiKnots had considered the class of knot tester when tendering his arguments.
 



DerekSmith

  • IGKT Member
  • Sr. Member
  • *****
  • Posts: 1518
  • Knot Botherer
    • ALbion Alliance
HACCP vs Statistics

Most of us have heard the phrase 'Lies, Damned Lies, and Statistics', but how can a branch of mathematics have acquired such a bad reputation?  The answer to that comes partly from a misuse of the interpretation of the results, and partly from a nasty habit statisticians have developed of 'deleting outliers'.  These 'outliers' are claimed to be recording errors or misreads or some other fault which creates faulty data, so they simply delete it from the data set, giving them a nice 'statistically significant' result.  But what if those outliers were real?

Take as an example a rope maker with a continuous process that occasionally introduces a fault that drastically reduces the MBS.  Regular QC checks will fail to see this occasional fault, but on the one occasion it happens to occur in the section of rope taken for testing, the very low result will likely call for repeats of testing which will of course all measure within the normal range.  The rogue figure is then likely to be put down to a mis-measurement, an outlier, and be deleted.  The risk though is that occasionally lengths of rope are sent out with a below standard MBS, and somewhere, sometime, one of these weakened ropes will be called upon to deliver its full expected strength and fail - possibly with fatal consequences.

It is human nature to seek nice tidy data sets and to steer shy of complexity.  It is this nature that leads us to reject 'problematical' data values and to set up our tests with rigorously uniform knots, in search of nice clean data sets.

Recognising the existence of these infrequent events and the unlikelihood of their being detected by routine QC testing has led industries to embrace HACCP - Hazard Analysis and Critical Control Points.

Using HACCP, this fictional rope manufacturer would observe that multiple spool changes might occur simultaneously which could lead to a single point weakening in the continuous rope making process.  This would be identified as a Critical Control Point and the manufacturer would set procedures in place to prevent simultaneous changes, or arrange for the faulty rope to be marked for removal.

The relevance for us knot testers is that we should be realistic in our choice of a range of possible tying forms as part of the range of actual variants this knot experiences.  Of course, this would go along with a resistance to considering any form of outlier deletion.  This makes the data set far messier, but will far better reflect the reality of the 'Knot space' that we are studying.

Of course, when we are comparing knots, it is these worst-case outliers that we should be concentrating on, because they will occasionally be made and classed as the knot we are testing.
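To make the worst-case point concrete, here is a minimal sketch (Python, hypothetical break-test figures) in which ranking two knots by mean strength and ranking them by the weakest observed pull give opposite answers - the single low pull for knot B standing in for an occasional mis-dressed tie:

```python
# Hypothetical break-test results (kN) for two knots, 10 pulls each.
knot_a = [18.1, 18.4, 18.2, 18.0, 18.3, 18.5, 18.1, 18.2, 18.4, 18.3]
knot_b = [19.5, 19.8, 19.6, 19.7, 12.0, 19.9, 19.4, 19.6, 19.8, 19.5]

mean_a = sum(knot_a) / len(knot_a)
mean_b = sum(knot_b) / len(knot_b)

# Knot B 'wins' on average, but knot A wins on the pull that kills.
print(f"mean strength:  A = {mean_a:.2f} kN, B = {mean_b:.2f} kN")
print(f"worst observed: A = {min(knot_a):.1f} kN, B = {min(knot_b):.1f} kN")
```

Delete B's 12.0 kN result as an 'outlier' and B looks clearly superior; keep it, and the picture reverses.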


DerekSmith

  • IGKT Member
  • Sr. Member
  • *****
  • Posts: 1518
  • Knot Botherer
    • ALbion Alliance
Quote
So, what 'title' would you suggest for this group?

I neither like pretentious titles nor denigratory ones, which is why I would naturally go for the likes of Group 1 etc., but I take your point that a more descriptive name has merit.  How about :-

Amateur
Amateur Equipped
Professional Testing Facility. ?

But to be honest, I don't see much need to separate them, because if a test claims some measurement, then we need to be able to put some error limits on the values; they will simply be far tighter in the case of the Professional facility.
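This 'same analysis, tighter error limits' view can be sketched numerically: the identical mean-and-confidence-interval calculation serves every class of tester; only the width of the interval changes with the equipment. A toy illustration (Python, invented repeat readings of the same knot measured on a fishing scale versus a calibrated load cell):

```python
import math

def mean_with_interval(readings, t=2.26):
    """Mean and approximate 95% confidence half-width (t ~ 2.26 for n = 10)."""
    n = len(readings)
    mean = sum(readings) / n
    var = sum((x - mean) ** 2 for x in readings) / (n - 1)
    return mean, t * math.sqrt(var / n)

# Invented repeat readings (kN) of the same break test:
fishing_scale = [17.5, 19.0, 18.0, 16.5, 19.5, 18.5, 17.0, 20.0, 18.0, 17.5]
load_cell = [18.20, 18.25, 18.15, 18.30, 18.10, 18.22, 18.18, 18.28, 18.12, 18.24]

for name, readings in (("fishing scale", fishing_scale), ("load cell", load_cell)):
    m, half = mean_with_interval(readings)
    print(f"{name}: {m:.2f} +/- {half:.2f} kN")
```

Both testers report in the same form, mean plus or minus a half-width; the amateur's interval is simply wider.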

Quote
by listing them 1,2,3 - this did not mean that 'group 3' were by default superior to 'group 1' or 'group 2'. It was simply a way of classifying the different entities.

Yet by scaling the scientific rigour required across the three levels, you automatically accord greater credibility to group 3.

Shouldn't we expect the same level of rigour from all three groups, and simply expect group 3 to be able to claim far tighter error limits than the other two?  Meanwhile, an equally important aspect of rigour - knot competence - should be demanded of all three groups, yet it is the very thing we would, of necessity, expect to be completely lacking from group 3 reports.

Quote
Perhaps - though I have found in technical discussions with various authors of knot reports around the world, that they request references and citations to back up claims.
 

This sadly is a commonplace form of arrogance and attempted superiority, practised extensively to cover up a lack of expertise in the key elements of the report.  It is here that, if the IGKT were truly a Guild, it would be of value to its members by giving recognised accreditation to individuals who have demonstrated exceptional contribution to the field - but alas, it is not to be.

[perhaps as a separate subject, knotters should petition the IGKT to award one of say three levels of accreditation to nominated individuals - say, Master, Associate and Graduate in either the Art or the Science of knotting ?]

Quote
NautiKnots argued for scientific rigor - underpinned by consistency, repeatability and statistically valid sampling of data. NautiKnots also argued that external agencies such as 'The Cordage Institute' and the 'IEEE' could and should be consulted when devising knot test plans because of their expertise. I am unclear whether NautiKnots had considered the class of knot tester when tendering his arguments.

Yes, they should be included in the consultation, but when it comes to Knots - the experts are from within the IGKT - not the Cordage Institute and certainly not the IEEE, and I hope I have made my opinions of 'statistical validity' suitably clear - we must seek out and understand the outliers, for they are the knots which might kill or maim, they are our path to understanding the complexity of our field, and that is completely at odds with mainstream 'Testing' mindset.

Yes, we should have rigour, but it must be rigour relevant to the reality of the world of knots tied every day by people of very little knot awareness - that is our playing field, that is the grey goo that we continue to sift through for understanding.

DerekSmith

  • IGKT Member
  • Sr. Member
  • *****
  • Posts: 1518
  • Knot Botherer
    • ALbion Alliance
Proposition :-

Fundamental Guideline #1

Establish what it is that you want to know.  Do not be distracted by the 'how it might be determined', concentrate on formulating exactly what it is you want to know.

Only after you have formulated your goal should you then start to investigate what tests might yield your answer.

Derek

Dan_Lehman

  • Sr. Member
  • *****
  • Posts: 3768
per Dan Lehman:
Quote
WHOA, I think you've mis-labeled "JAMMED"/<not> there:
for the SParts in what you're claiming are respectively

There is no 'WHOA' at all Dan.
Everything reported is factual and as I observed.
...
Okay, well, I didn't think there was no jamming,
only that your image belied which side.  But I see
now that the apparent "space"-revealing white
speck is some sort of reflection off of the rope tail and
not the space I'd thought --which sort of bit of space
can be seen in the less-loaded knot to the left and
at the other side of both SParts than is this misleading
speck.

Now, to try this in some different rope --i.p., something
that doesn't compress like the multi-stranded-kern'd
rope.

(-;

agent_smith

  • Sr. Member
  • *****
  • Posts: 993
per Derek:
Quote
Amateur
Amateur Equipped
Professional Testing Facility. ?

But to be honest, I don't see much need to separate them

I had considered using the descriptor 'amateur' but it can imply a meaning that is unfair or unwarranted:
See this link to a dictionary definition: https://dictionary.cambridge.org/dictionary/english/amateur  (scroll down a bit to see this possible imputation; ..."someone who does not have much skill in what they do")

Perhaps a more suitable descriptor for backyard testers is; Hobbyist/Enthusiast tester ?

Full disclosure statement: Derek, I am providing a dictionary link not in an attempt to be derogatory or insulting toward you. I am merely pointing to an external source. There is absolutely no intention to be insulting in any way! I have to insert these disclaimers because I ran afoul of Mobius for quoting the dictionary - which he interpreted as demeaning or derogatory. Just to be clear, I intend nothing of the sort!

So for the reason that the word 'amateur' could possibly be misconstrued - I chose not to use it.

In terms of a desire to distinguish between different classes of 'tester' (and here again is a source of irritation with the ambiguous distinction between a knot tester and a knot trialer):
I do think making a distinction is important.
I believe that expectations of scientific rigour scale accordingly.

I believe that some on this forum have apprehension of drawing criticism for their 'knot testing' efforts.
And so they shy away from identifying as a class of tester where expectations may be beyond their capabilities.
In my view, advances are made in a scientific field when others have a chance to peer review or try to reproduce published results. That is how science is done - someone tests and publishes, and then others can either confirm or refute the results.
Criticism is part of the process - but it is inevitable that some may have difficulty in accepting criticism. And if expectations scale according to your 'tester class' - setting a lower bar is a way of escaping this process.

If we look at past evidence and the current crop of knot test reports from around the world - it is clear that some are 'holding themselves out' as experts. That is, you can read/download reports from certain individuals - and it is clear that they are holding themselves out as possessing a special expertise. Readers often assume they are 'experts' - and accept their conclusions at face value.
Credibility plays a role - and some knot testers (mostly from a class of testers I refer to as 'pseudo labs' - or well equipped enthusiasts) hold a certain level of professional credibility and can significantly influence the lay public. Examples of these pseudo lab testers are Richard Delaney (rope test lab) and Grant Prattley (Over the edge rescue). They regularly test and publish their results. I would not class them as enthusiast/hobbyist (aka 'backyard testers'). But they are not certified, nationally accredited test labs.
So my view is that semi-professional (pseudo lab) testers like Richard Delaney and Grant Prattley must be willing to accept criticism and peer review of their published results - as they are publishing to the world - and people assume they are 'experts'.
Expectations scale accordingly - and I believe that a higher level of scientific rigour is warranted from these individuals than from enthusiast/hobbyist testers.

Richard Delaney (for example) also holds an Engineering degree from a university - which he further promotes as an integral part of his test lab. Such credentials impart credibility - which an enthusiast/hobbyist generally does not have (some may - but on balance, most enthusiast/hobbyist testers likely wouldn't hold Engineering degrees).

I would expect a much higher degree of scientific rigour from a certified, nationally accredited test lab (ie professional test lab). If this class of tester is publishing to the world, they must be willing to accept criticism via peer review. Certainly, NautiKnots' arguments for scientific rigour would apply to this class of tester. They are generally well funded, have a purpose built test facility and can measure and capture data with sophisticated computers and software. There is usually an Engineer in residence at the facility.

For an enthusiast/hobbyist class tester, with very limited funds (meaning nearly zero $), an improvised force generating machine and maybe some sort of force measuring device (a fishing scale?) - not to mention very limited spare time - scientific rigour is likely to be (at best) minimal. The ability of others to repeat their results (to confirm or refute) is probably limited. For example, the cord/rope material is often the cheapest they can source - and likely doesn't meet any particular manufacturing standard. For another peer-review tester living in a different nation, it would be near impossible to purchase the exact same material.

So I think we do need to distinguish between different classes of tester. This is most certainly not intended to be demeaning, derogatory or insulting, nor to devalue anyone; it is simply a way of scaling expectations of scientific rigour.

EDIT NOTE
In relation to distinguishing between different classes of testers:
1. Hobbyist/Enthusiast
2. Semi-professional
3. Professional test lab

This avoids the term 'backyard' - which some may take offense to (even though it isn't intended to be derogatory or demeaning - it is just a metaphor).
« Last Edit: August 02, 2018, 08:23:40 AM by agent_smith »

DerekSmith

  • IGKT Member
  • Sr. Member
  • *****
  • Posts: 1518
  • Knot Botherer
    • ALbion Alliance
Quote
I had considered using the descriptor 'amateur' but, it can imply a meaning that is unfair or unwarranted:
See this link to a dictionary definition: https://dictionary.cambridge.org/dictionary/english/amateur  (scroll down a bit to see this possible imputation; ..."someone who does not have much skill in what they do")

I take your point on this Mark.  Here in the UK the distinction is more focused on payment.  If you are paid for your work you are Professional and if you are unpaid you are Amateur; there is no denigration in perceived value.  This possibly stems from the fact that Amateur Radio hams are amongst the world's top experts in their field.  Often, because they are not constrained by the need to turn a profit, Amateurs are able to progress R&D way beyond that achieved by 'Professionals'.  Add to this the fact that Amateurs are driven by passion while Professionals are driven by wage and continued employment, and you might see that Amateurs are generally respected as the experts.

Still, I am not a word botherer,  and far more important to me is the unjustified and unjustifiable elevation in credibility you seem keen to accord to Professional Test Labs.     I have said it already, but it seems worth stating again - I have been there - my labs used state of the art equipment, 0.1DIN test equipment, NIST traceable standards, automated analysis equipment, direct data capture and latest generation Statistical Analysis software.  High precision, high accuracy, high repeatability using agreed methodology.  Yet with no Nodeologist present it might be nothing other than highly accurate rubbish, while a Knot expert in his workshop, with a ruler and a bag of sand or a 10ton jack (for comparative assessments) would be able to make seriously valid assessments of knot behaviour.

Put it another way - scaling precision without expert intelligence does not scale value.

Accuracy without Expertise is a sham of our modern mindset, and it should be our job to think of them as a 0.1 DIN hammer ...

There, I have explained it twice now.  You know where I stand and why.  I will now shut up on the subject.

Derek

DerekSmith

  • IGKT Member
  • Sr. Member
  • *****
  • Posts: 1518
  • Knot Botherer
    • ALbion Alliance
Quote
I believe that some on this forum have apprehension of drawing criticism for their 'knot testing' efforts.
And so they shy away from identifying as a class of tester where expectations may be beyond their capabilities.
In my view, advances are made in a scientific field when others have a chance to peer review or try to reproduce published results. That is how science is done - someone tests and publishes, and then others can either confirm or refute the results.
Criticism is part of the process - but it is inevitable that some may have difficulty in accepting criticism. And if expectations scale according to your 'tester class' - setting a lower bar is a way of escaping this process.

Point made.  New ideas and perspectives are more valuable than a 0.1DIN hammer.