All -
Some perhaps relevant motor design history is in order.
In the early days, machines were expected to operate in a thermally acceptable fashion at power loads up to the listed nameplate value. This worked just fine when the driven loads were primarily fans, pumps, short-run conveyors, and small compressors. The next step was to apply these same machines to industrial processes such as paper-making and metal rolling. All of a sudden, it became imperative to have enough "stock" on hand to see a company through an outage event (regardless of whether it was planned or unplanned). To that end, the users asked the manufacturers to produce equipment with some "extra" thermal capability. The result was the "service factor". The intent of this additional capacity was to enable the user to do one (or both) of two things: overload the machine for a few days before a planned outage so that there was extra "stock" on hand to meet demand, and/or run at overload for a few days after the outage (planned or unplanned) to make up the "stock" shortfall. It was never intended as a way to operate above nameplate on a continuous basis instead of uprating the capacity (or initially under-sizing the capacity) of the equipment to run the process.
However, that is indeed one of the things the term has become synonymous with. The other is that the longevity of the equipment can be reliably traced to the thermal stress on the winding(s). The thought process became something like "if we build in extra thermal margin, we're going to have extra life because we won't be pushing the machine as hard, so we won't need to upgrade/replace it for another 20 years". Hence the progression toward machines with a non-unity service factor being restricted to unity (or very slightly above unity) loading over the long term.
I partially agree with che: the IEC world does not CURRENTLY accept the concept of a non-unity service factor (or capability). However, this was not always the case, for the reasons given in the opening paragraph. As to the term "duty": IEC defines it as follows.
Duty is defined as being the load condition the machine is subject to, including (if applicable) the periods of starting, electrical braking, operating with no load, and rest, as well as their duration and sequence in time.
The key thing to remember here is that ALL of these factors must be taken into account in the machine design. A machine designed for S1 duty, with only minimal start/brake capability over its life, may not be as robust in an application with another duty (e.g. S4) as one specifically designed to meet that S4 duty - even though the two see similar "continuous" loading, the S4 design accounts for the increased starting/braking cycles. The design differences may relate to process-imposed thermal, electrical, environmental, and/or mechanical stresses.
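As a rough illustration of why the whole duty cycle matters thermally, here is a minimal sketch of an RMS-equivalent load calculation over a repeating cycle. This is a deliberate simplification of my own (heating assumed proportional to load squared, a single thermal time constant ignored, starting current not modeled separately); the numbers are illustrative and not taken from any standard.

```python
import math

def rms_equivalent_load(segments):
    """Compute the RMS-equivalent load fraction over one repeating duty cycle.

    segments: list of (load_fraction, duration_s) pairs, e.g. (1.5, 10.0)
    means 150% load for 10 seconds. Heating is assumed to scale with the
    square of load - a simplification that ignores thermal time constants
    and the much higher current drawn during an actual across-the-line start.
    """
    total_time = sum(duration for _, duration in segments)
    weighted = sum(load * load * duration for load, duration in segments)
    return math.sqrt(weighted / total_time)

# Hypothetical S4-like cycle: 150% during a 10 s start, 100% for 240 s
# of running, then 50 s at rest.
cycle = [(1.5, 10.0), (1.0, 240.0), (0.0, 50.0)]
print(round(rms_equivalent_load(cycle), 3))  # -> 0.935
```

Even though the averaged figure comes out below unity, the repeated starting segments impose localized stresses that a machine sized purely for S1 continuous duty was never designed to absorb - which is the point of the duty classifications.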
Converting energy to motion for more than half a century