From an overcurrent protection coordination standpoint, the minimum fault is the smallest fault current that represents a "real" short circuit for that system. Typically, a fast definite-time or instantaneous element is set to something like 80-90% of the minimum fault current, i.e. you want the protection relay to trip as quickly as possible whenever there is a real fault. Anything lower than the minimum fault current is considered to be some other kind of abnormal condition and would be handled by other protection elements, e.g. inverse-time elements, thermal overload, stall protection, etc.
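As a rough illustration, here is a minimal Python sketch of that rule of thumb. All the numbers (fault current, load current, the 85% margin) are hypothetical:

```python
# Minimal sketch: choosing an instantaneous pickup from the minimum fault
# current. All values are hypothetical, for illustration only.

I_fault_min = 2500.0  # A, lowest credible bolted-fault current at this relay
I_load_max = 400.0    # A, maximum load current through the relay

margin = 0.85  # somewhere in the 80-90% band mentioned above

pickup = margin * I_fault_min
print(f"Instantaneous pickup: {pickup:.0f} A")

# Sanity check: the fast element must never reach down into load current;
# anything below the pickup is left to inverse-time / thermal elements.
assert pickup > I_load_max, "pickup must sit well above maximum load"
```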
So the "minimum" part of the term refers to the overall system conditions that would lead to the lowest fault level (even a bolted short with zero fault impedance). This would include configurations and parameters that:
- Increase the upstream source impedance, e.g. N-1 contingency cases (a single line or transformer out of service), minimum generation (in islanded systems), highest ambient temperature (i.e. higher resistance in upstream lines), etc.
- Lead to the lowest voltages at the point of fault, e.g. transformer tap settings, shunt capacitor banks out of service / reactors in service, etc. (in IEC 60909, this is assumed to be covered by the catch-all prefault voltage factor c, taken as c_min for minimum fault calculations)
- Have the least fault contribution from other sources, e.g. large induction motors out of service
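To put rough numbers on how these conditions bracket the fault level, here is a minimal sketch using the IEC 60909 form I_k = c·U_n / (√3·Z_k). The impedances are hypothetical; the c factors are the usual IEC 60909 low-voltage values (c_max = 1.10, c_min = 0.95), so check Table 1 of the standard for your own voltage level:

```python
import math

U_n = 400.0  # V, nominal line-to-line voltage

# Intact network, all infeeds in service -> lowest source impedance (hypothetical)
Z_max_case = 0.016  # ohm
# N-1 case (one parallel transformer out), motors disconnected, conductors
# at highest ambient temperature -> highest source impedance (hypothetical)
Z_min_case = 0.035  # ohm

I_k_max = 1.10 * U_n / (math.sqrt(3) * Z_max_case)  # c_max with lowest Z
I_k_min = 0.95 * U_n / (math.sqrt(3) * Z_min_case)  # c_min with highest Z

print(f"Maximum fault current: {I_k_max / 1000:.1f} kA")  # ~15.9 kA
print(f"Minimum fault current: {I_k_min / 1000:.1f} kA")  # ~6.3 kA
```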
Edit: some programs like ETAP also include manufacturer tolerances, for example transformer impedance tolerances of ±10%. So you would use the negative tolerance for the maximum fault case and the positive tolerance for the minimum fault case. The argument is that unless you have measured the impedance in the field, the manufacturer could be out by that much.
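As a sketch of how such a tolerance widens the min/max bracket, continuing the same style of calculation (the nameplate figures are hypothetical, and an infinite source is assumed for simplicity):

```python
import math

S_n = 1000e3  # VA, transformer rating (hypothetical)
U_n = 400.0   # V, secondary line-to-line voltage
u_k = 0.06    # nameplate impedance voltage, 6% (hypothetical)

Z_T = u_k * U_n**2 / S_n  # nominal transformer impedance, ohm

tol = 0.10
Z_T_low = Z_T * (1 - tol)   # negative tolerance -> used for the maximum fault
Z_T_high = Z_T * (1 + tol)  # positive tolerance -> used for the minimum fault

# Transformer-only fault at the secondary, infinite source assumed
I_max = 1.10 * U_n / (math.sqrt(3) * Z_T_low)
I_min = 0.95 * U_n / (math.sqrt(3) * Z_T_high)

print(f"Nominal Z_T: {Z_T * 1000:.2f} mohm")
print(f"Fault with -10% tolerance (max case): {I_max / 1000:.1f} kA")
print(f"Fault with +10% tolerance (min case): {I_min / 1000:.1f} kA")
```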