Of course I could be laboring under a delusion Mr. KG, but I don’t think that I’m misinterpreting SET or other data presented here, but rather looking at it from a different perspective.
Indeed, I don’t disagree when you stated, “In applications where the sharp blade is a requirement, it all matters and matters a lot.” No doubt those types of applications exist, as well as other applications where even a small improvement in sharpness and edge longevity is significant. I believe I have often mentioned special use environments as an exception in my posts.
What is important to me, and how I interpret the data, is a “for all intents and practical purposes” point of view. In other words, general-use knives and cutlery that I would use in my kitchen or around the house. These blades get to 250 quickly and probably won’t be resharpened until they are somewhere around 300–350. Sure, it would be great if all my blades were always 150 or sharper, but even though I hate to admit it, that’s just not what happens.
So, when I consider this stuff, it’s from a practical, general real-world use perspective. From that point of view I’m not very concerned with creating the most perfect edge in every detail, or with the minutiae of metallurgical and molecular details about sharpening. It’s interesting, but of little consequence when I’m in the basement touching up (umm… slopping out?) my kitchen knives on my Kally.
It may be sort of a caveman perspective, but for the most part what I find myself doing in reality is creating a good, useful edge in the most expeditious way I can. Make ’em sharp (150 or less), make ’em toothy, use them, and repeat as necessary.
I applaud your attention to detail and perfecting the ideal edge. It’s very impressive and your results speak to that. I’m sure your expertise is appreciated and respected by your customers.
I can understand that when you consider the test data your perspective is toward perfection and detail. When I view the same data I do so with less lofty, more Neanderthal-oriented goals. Both are equally valid, just different.

