Tanker Shipping & Trade

The trouble with observations

Thu 26 Oct 2017

David Savage (Oceanfile): “Use the data buried in SIRE to identify exactly where the shortcomings lie”

Oceanfile Marine’s David Savage offers a personal view of deficiencies inherent in the current approach to tanker vetting

Factors considered by oil companies to determine spot charter acceptability are many, complex and not always consistent. A vessel’s vetting history, the operator’s fleet performance, recent changes of operator, management, class, flag, structure and age considerations are all taken into account. Other factors include specific voyage risks (political/weather/war/piracy threats etc), casualty data, port state and TMSA results, and the financial status of the operator. In many cases, though, the decision to accept or reject is strongly influenced by SIRE results: an acceptable SIRE result is critical.

Unfortunately, tanker vetting continues to be fixated on counting numbers of Observations as a measure of quality. Fleets are benchmarked and personnel rewarded/penalised based on this arbitrary and often misleading measure. In many cases an ‘acceptable SIRE result’ is less a measure of risk and more a raw count of the number of Observations contained in one or more SIRE reports. Two Observations in Chapters 4 (Navigation), 5 (Safety) or 6 (Pollution Prevention) are likely to result in rejection, regardless of the nature of those Observations.

The overzealous SIRE inspector frequently records numerous trivial Observations. Forgetting that his role is to report the condition of the vessel and the standards of operation observed so that the vessel can be assessed, he prefers to impress his principals with a long list of Observations that have no bearing on the risk a vessel might pose.

Post Erika, when oil companies were closely monitoring events in the Paris courtroom, it became apparent that their vetting processes were deficient. Until that time, use of SIRE generally relied on the review of a single report relating to an inspection conducted within recent weeks or months by a trustworthy oil company. The realisation that any one, or more, published and available SIRE reports might contain serious Observations that would result in a vessel’s rejection led to many oil companies deciding that all available SIRE reports for vessels of potential interest were to be analysed.

The preference for many, but not all, was to favour software solutions over the expertise of vetting superintendents. This led to the introduction of auto-vetting systems in conjunction with varying degrees of manual oversight.

Auto vetting uses algorithms where the VIQ questions are pre-assigned with Low-/Moderate-/High-risk values. Whenever reports are imported, the system applies the appropriate risk weighting to all questions where negative Observations have been made. These are added up to deliver the risk measure associated with the vessel.
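The scoring described above can be sketched in a few lines. This is a minimal illustration only, assuming hypothetical VIQ question numbers and risk weights; real auto-vetting systems use their own proprietary weightings.

```python
# Hypothetical pre-assigned risk weight per VIQ question.
# Chapters follow the article's examples: 4 = Navigation,
# 5 = Safety, 6 = Pollution Prevention.
QUESTION_RISK = {
    "4.1": 3,  # High risk (Navigation)
    "5.2": 3,  # High risk (Safety)
    "6.4": 2,  # Moderate risk (Pollution Prevention)
    "9.7": 1,  # Low risk
}

def auto_vet_score(observed_questions):
    """Sum the pre-assigned weights of every question that drew a
    negative Observation, regardless of the Observation's content."""
    return sum(QUESTION_RISK.get(q, 0) for q in observed_questions)

# A single trivial Observation against a High-risk question still
# contributes the full High-risk weight:
report = ["4.1"]  # e.g. "Oil marks were noted on the doormat"
print(auto_vet_score(report))  # prints 3 -- a High-risk score
```

Because the weight is attached to the question rather than to the content of the Observation, the system cannot tell a doormat stain from a disabled radar, which is precisely the flaw discussed next.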

These systems are flawed, though they are at least more effective than a simple count of the number of Observations. The flaw is that any negative Observation recorded against a VIQ question pre-assigned a High-risk value delivers a High-risk result, whatever the Observation actually says. An Observation as trivial as “Oil marks were noted on the doormat to the vessel’s office”, if attached to a High-risk question, will deliver an erroneous High-risk result.

This might easily result in the vessel being either rejected, or put on Technical Hold until such time as the operator can persuade the oil company vetting department to examine the report contents and have the vessel re-appraised.

At the very least, delays will occur and may result in a vessel being deemed non-acceptable to one or more charterers, and chartering opportunities may be lost. In such cases it is essential that the operator provides immediate and persuasive evidence to demonstrate that the evaluation is flawed.

Auto-vetting systems are smart but lack a crucial component - the Mark 1 eyeball of an experienced mariner. An accurate assessment involves evaluation of every report, confirmation that Observations are justified and that the automatically derived Low-, Moderate- or High-risk result properly reflects the words of the Observations. For a big oil company, this is not a realistic expectation because of manpower constraints and the thousands of reports that are analysed each year. Vetting decisions are made by the system and it will be up to the operator to convince the oil company to take another look when report contents do not warrant rejection.

For an operator, use of SIRE reports as accurate measures of risk makes it essential that every report is analysed relative to the risks identified, as well as to the acceptability criteria of the oil company’s vetting expectations. It is essential that operators paying oil companies US$5,000 or more for inspections ensure that they are gaining maximum value for their outlay.

In assessing a report, if the potential risk value is appropriate to the Observation, nothing needs to be done. If the Observation is trivial but the question has been assigned as High-risk, then the potential risk value must be reduced. Conversely, if a VIQ question assigned with a Low potential risk value has an Observation response that warrants a High-risk value, the risk rating must be increased. After assessment, the report will deliver results that accurately measure the level of risk associated with the vessel.
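The re-assessment rule above can be expressed as a simple override. This is a sketch under assumed names only: the pre-assigned weights and the `assess` helper are hypothetical, not Oceanfile's actual implementation.

```python
# Hypothetical pre-assigned risk values per VIQ question.
PRE_ASSIGNED = {"4.1": "High", "9.7": "Low"}

def assess(question, observation_text, reviewer_rating=None):
    """Return the question's pre-assigned risk value unless the
    reviewer, having read the Observation, overrides it."""
    return reviewer_rating or PRE_ASSIGNED.get(question, "Moderate")

# Trivial content on a High-risk question: reduce the rating.
print(assess("4.1", "Oil marks on the office doormat",
             reviewer_rating="Low"))    # prints Low

# Serious content on a Low-risk question: raise the rating.
print(assess("9.7", "Fixed fire-fighting system inoperative",
             reviewer_rating="High"))   # prints High

# Appropriate rating: no override needed, pre-assigned value stands.
print(assess("4.1", "Passage plan not signed by the Master"))  # prints High
```

Only after every Observation has passed through this review does the report deliver a score that genuinely measures the vessel's risk rather than the questionnaire's defaults.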

Operators recognised that SIRE reports contain huge volumes of information that can help them identify weaknesses and shortcomings, but they needed software to access and analyse this data and measure risks much more effectively. In 2013 Oceanfile was asked by users to provide the means to measure the risk levels associated with SIRE report Observations. Operators recognised the shortcomings of counting the numbers of Observations, and of pre-assigning High-/Moderate-/Low-risk values. They also wanted to assign responsibilities, measure ship and shore personnel performance, identify hotspots, assess equipment reliability, quantify risks, pinpoint shortcomings and direct remedial/corrective actions to prevent repetition. The Oceanfile Risk Assessment tools provide all of these, as well as the means to adjust the potential risk value as appropriate to the Observation.

Of course, cheating is easy and setting all Observation risk levels to zero will deliver risk-free results, but those tempted need to be careful. Those operators that thought that satisfying TMSA in 2004 merely required them to assess themselves at Level 3 or 4 for every element were embarrassed when undergoing their first TMSA office audits. The ‘Honesty is the best policy’ approach is crucial when dealing with safety, pollution prevention and risk. Anything less will have seriously misleading and potentially dangerous consequences.

When confronted by oil company auditors armed with their own statistics, operators need to produce compelling data of their own: data that properly reflects the fleet’s performance and assesses the risks associated with each and every inspector Observation, for every inspection conducted. Cheaters will only fool themselves: they will find themselves and their company in embarrassing situations when auditors look into their results.

Those operators that are passionate about driving excellence, eradicating pollution and enhancing safety use the big data buried in SIRE to identify exactly where shortcomings lie and what corrective and preventative measures need to be taken. If risk ratings are honestly applied, they will deliver statistics to support the company’s policies that constantly drive improvements. In such cases they can be confident when confronted by auditors who demand “Show me – prove it!”
