Supervisory Powers over Financial Institutions


Since the onset of the housing market collapse, financial institutions have been failing at an astounding rate. As more banks fail and taxpayer bailouts are passed by Congress, one can only wonder whether anyone is monitoring these institutions. How do we identify the causes of bank failures? Do we blame the financial institutions themselves or the agencies that govern them? The answer may lie somewhere in between, and it is not easily determined with the methods currently used to identify failing financial institutions.


To help identify bank failures, regulatory agencies are granted supervisory powers that allow them to examine financial institutions for safety and soundness. Are the supervisory powers the agencies currently hold adequate to identify a failing institution before it is too late, or are the processes they use flawed? Furthermore, the examinations that supervisory agencies give to financial institutions should be analyzed to better understand why those agencies fail to adequately predict bank failures.


An Increase in Bank Failures


In 2008, after years of relatively few bank failures (an average of just over three per year from 2000 to 2007), banks rapidly began failing. Twenty-five financial institutions failed in 2008, and that was only the tip of the iceberg (see Figure 1). In 2009, failures rose again, with 140 institutions closed during the year. As of July 15, 2010, bank failures have reached 90 for the year, with no sign of stopping there. McIntyre (2010) and Scatigna (2010) are forecasting even higher numbers of bank failures this year. With so many failures forecast for the near future, why could these failures not be stopped, and what are the supervisory agencies doing to prevent future ones?

[Figure 1. Bank failures by year]


Banks generally fail for one of two reasons. First, a bank can become insolvent, forcing its supervisory agency to step in and close the institution. A bank is insolvent when its liabilities exceed its total assets. The second reason banks fail is that they become illiquid. A bank that is illiquid generally has assets roughly equal to its liabilities; however, it has a hard time liquidating those assets to meet depositors' withdrawal demands. Whether banks fail because they are insolvent or because they are illiquid, the supervisory agency over each institution must be able to monitor its banks adequately in order to predict their failure.
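
To make this distinction concrete, the sketch below labels a bank's condition from a simplified balance sheet. It is only an illustration: the figures, the "liquid assets" split, and the withdrawal-demand threshold are hypothetical, not drawn from any regulatory definition.

    # Python sketch (hypothetical figures) of the two failure modes described above.
    def classify_bank(total_assets, liquid_assets, total_liabilities, demanded_withdrawals):
        """Return a rough label for a bank's condition."""
        if total_assets < total_liabilities:
            return "insolvent"   # liabilities exceed total assets
        if liquid_assets < demanded_withdrawals:
            return "illiquid"    # assets cover liabilities, but too few can be
                                 # converted to cash in time to meet withdrawals
        return "sound"

    # Assets equal liabilities, but only a small share can be liquidated quickly:
    print(classify_bank(total_assets=100.0, liquid_assets=5.0,
                        total_liabilities=100.0, demanded_withdrawals=12.0))  # -> illiquid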


Bank supervision and bank regulation are two terms that are often confused with one another. The Federal Reserve (2005) describes them as "distinct, but complementary, activities." Bank supervision involves three distinct actions: monitoring, inspecting, and examining. Each action is a key component in assessing the overall condition of a banking organization. Organizations found violating laws within the regulatory agency's jurisdiction can have formal or informal actions taken against them to rectify the problems. Bank regulation, by contrast, involves the creation of regulations and guidelines that govern the day-to-day activities of banking organizations.


Authority to Impose Regulatory Enforcement Action


Regulatory agencies have a broad range of powers for dealing with troubled institutions, with the goal of catching problems early and minimizing more costly supervisory measures further down the road (Malloy, 2003). Among the agencies with powers over financial institutions are the Office of the Comptroller of the Currency (OCC), the Federal Reserve Board (FRB), the Federal Deposit Insurance Corporation (FDIC), and the Office of Thrift Supervision (OTS).

[Figure 2. Enforcement actions by supervisory agency over the last ten years]


Malloy (2003) states that the enforcement provisions of the Federal Deposit Insurance Act (FDIA) are "applicable regardless of the type of depository institution involved" and constitute "a more or less unified body of federal enforcement provisions." Figure 2 breaks down each supervisory agency's enforcement actions over the last ten years.


The OCC has supervisory powers over national banks. The Comptroller is given the authority to examine these institutions through the Federal Deposit Insurance Act (FDIA). The FDIA gives the OCC a means of uncovering, during examinations, violations of law and "unsafe and unsound practices" at national banks. Institutions under OCC supervision may be required, among other things, to pay "any deficiencies in capital" or to increase the amount of capital they are required to hold (Malloy, 2003). The OCC may also impose personal liability on management when violations of the National Bank Act occur.


The FRB has supervisory authority over state-chartered banks that are members of the Federal Reserve System, bank holding companies (BHCs), Edge and agreement corporations, foreign branches of member banks, and certain nonbanking activities of foreign banks (Federal Reserve's Publication Committee, 2005). Under the Federal Reserve Act, the FRB may impose civil money penalties on a bank's directors and officers.


The FDIC has supervisory authority over state-chartered banks that are not members of the Federal Reserve System. The FDIC insures bank deposits up to a set amount and has special examination authority to determine the condition of an insured bank or savings association for insurance purposes (Federal Reserve's Publication Committee, 2005). The FDIC also serves as the federally designated receiver, which allows it to liquidate banks that become insolvent (Malloy, 2003).


The OTS has supervisory authority over savings associations, which generally focus on residential mortgage lending (Federal Reserve's Publication Committee, 2005). The OTS also supervises federal savings associations as well as companies that own or control other savings associations. Among the enforcement provisions available to the OTS are administrative cease-and-desist orders, the suspension or removal of members of management, and civil money penalties (Malloy, 2003).


The work of the regulatory agencies is further standardized by the Uniform Financial Institutions Rating System (UFIRS), which the Federal Financial Institutions Examination Council adopted in 1979 to create a common evaluation and rating system for financial institutions. UFIRS is based on the evaluation and rating of six key components: Capital adequacy, Asset quality, Management capability, Earnings, Liquidity, and Sensitivity to market risk, collectively known as CAMELS (Rau, 2008). Even with CAMELS ratings, supervisory agencies are not catching problems fast enough to prevent failures. This became evident when both the FDIC and the OTS examined IndyMac Federal Bank, FSB (IndyMac) of Pasadena, California.
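
As a rough illustration of the rating structure, the sketch below records the six CAMELS components on the 1 (strongest) to 5 (weakest) scale and computes a naive composite. Under UFIRS the composite rating reflects examiner judgment rather than an arithmetic formula, so the averaging here is a simplification, and the sample ratings are hypothetical.

    # Python sketch of the six UFIRS/CAMELS components; the composite shown is a
    # naive rounded average for illustration only, not the UFIRS methodology.
    from dataclasses import dataclass

    @dataclass
    class CamelsRatings:
        capital_adequacy: int
        asset_quality: int
        management: int
        earnings: int
        liquidity: int
        sensitivity_to_market_risk: int

        def naive_composite(self) -> int:
            components = [self.capital_adequacy, self.asset_quality, self.management,
                          self.earnings, self.liquidity, self.sensitivity_to_market_risk]
            return round(sum(components) / len(components))

    # Hypothetical institution with weak asset quality and management:
    bank = CamelsRatings(2, 4, 4, 3, 3, 2)
    print(bank.naive_composite())  # -> 3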

IndyMac Federal Bank, FSB


Supervisory responsibility over IndyMac rested with the Office of Thrift Supervision (OTS). The OTS closed IndyMac on July 11, 2008, and named the Federal Deposit Insurance Corporation as conservator (Office of Inspector General, 2009). The audit report published by the Department of the Treasury's Office of Inspector General (OIG) states that "IndyMac's failure [was] largely associated with its business strategy of originating and securitizing Alt-A loans on a large scale." IndyMac pursued an aggressive strategy to increase profits, relying on nontraditional loan products and insufficient underwriting while borrowing heavily from costly sources. After 2007, in the midst of the mortgage market decline, IndyMac was left holding $10.7 billion in loans. As the bank became illiquid, the situation took a turn for the worse when account holders staged a "run," withdrawing $1.55 billion in deposits and leaving IndyMac with no way to liquidate its assets to cover its liabilities. The OIG made it clear that "the underlying cause of the failure was the unsafe and unsound manner in which the thrift was operated." If IndyMac's business strategy is to blame for its closing, why didn't the FDIC and OTS uncover the risks of that strategy before it was too late?

Rating the Banks


Each regulatory agency needs to be able to efficiently monitor the condition of the banking institutions it supervises. Two ways agencies achieve this goal are onsite and offsite examinations. Under section 10(d) of the Federal Deposit Insurance Act (12 U.S.C. 1820(d)), the FDIC must conduct onsite examinations on an annual basis. This interval may be extended to 18 months for lower-asset institutions, and if the FDIC relies on examinations conducted at the state level, the interval between FDIC examinations can stretch to three years (Rau, 2002).
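
A simplified reading of that interval rule can be sketched as follows. Only the 12-month default, the 18-month extension, and the three-year outer bound come from the text above; the asset-size cutoff is a hypothetical placeholder, and the month arithmetic is approximate.

    # Python sketch of the examination-interval rule; the asset cutoff is hypothetical.
    from datetime import date, timedelta

    SMALL_BANK_ASSET_CUTOFF = 250_000_000  # placeholder for "lower asset" institutions

    def next_onsite_exam_due(last_exam, total_assets, relies_on_state_exam):
        """Approximate due date of the next FDIC onsite examination."""
        if relies_on_state_exam:
            months = 36   # interval when the FDIC relies on state-level examinations
        elif total_assets < SMALL_BANK_ASSET_CUTOFF:
            months = 18   # extended interval for lower-asset institutions
        else:
            months = 12   # default annual examination under 12 U.S.C. 1820(d)
        return last_exam + timedelta(days=months * 30)  # rough month arithmetic

    print(next_onsite_exam_due(date(2009, 6, 30), total_assets=180_000_000,
                               relies_on_state_exam=False))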

To bridge the gap between onsite examinations, the FDIC uses several offsite monitoring tools, such as the Statistical CAMELS Offsite Rating (SCOR) review program, the Growth Monitoring System (GMS), and the Real Estate Stress Test (REST). In 2002, the Office of Inspector General conducted an audit titled Statistical CAMELS Offsite Rating Review Program for FDIC-Supervised Banks, which was delivered to Michael J. Zamorski, Director of the Division of Supervision and Consumer Protection. The audit set out to determine the effectiveness of the SCOR review program, and upon its completion several key findings emerged (Rau, 2002):

  • A time lag of up to 4 ¼ months exists between the date of the Call Report and the subsequent offsite review;
  • The SCOR system depends on the accuracy and integrity of Call Report information to serve as an early warning between examinations;
  • The SCOR system cannot assess management quality and internal control or capture risks from non-financial factors such as market conditions, fraud, or insider abuse; and
  • DSC case managers rarely initiate follow-up action to address probable downgrades identified by SCOR, other than deferring to a past, present, or future examination.
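
To show how an offsite review of this kind works in principle, the sketch below flags an institution for follow-up based on a handful of Call Report figures. It is not the FDIC's SCOR model, which is a statistical model estimated from Call Report data and past examination ratings; the ratios, thresholds, and field names here are invented for illustration.

    # Python sketch of a hypothetical offsite screen (not the actual SCOR model).
    from datetime import date

    CALL_REPORT_LAG_DAYS = 130  # roughly the "up to 4 1/4 months" lag noted above

    def flag_for_review(call_report, current_composite, as_of=None):
        """Return reasons a case manager might flag an institution for follow-up."""
        as_of = as_of or date.today()
        reasons = []
        if (as_of - call_report["report_date"]).days > CALL_REPORT_LAG_DAYS:
            reasons.append("Call Report data are stale; offsite rating may be outdated")
        if call_report["equity_capital"] / call_report["total_assets"] < 0.05:
            reasons.append("thin capital cushion")
        if call_report["noncurrent_loans"] / call_report["total_loans"] > 0.03:
            reasons.append("elevated noncurrent loans")
        if reasons and current_composite <= 2:
            reasons.append("probable downgrade from the current composite rating")
        return reasons

    sample = {"report_date": date(2010, 3, 31), "equity_capital": 40.0,
              "total_assets": 1000.0, "noncurrent_loans": 45.0, "total_loans": 700.0}
    print(flag_for_review(sample, current_composite=2, as_of=date(2010, 7, 15)))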

The SCOR review program depends on management's ability to submit Call Report data accurately (Rau, 2002). The FDIC assesses management's ability through onsite examinations, and because onsite examinations can take place up to three years apart, the review of management is left in the interim to the integrity of the reporting financial institution. Some of the examiners' comments on management quality were (Rau, 2002):

  • Board oversight and executive officer performance
  • Management supervision is unsatisfactory, and senior management’s ability to correct deficiencies in a timely manner is questionable
  • President and senior management engaged in new and high-risk activities without sufficient Board supervision, due diligence, and adequate policies.
  • Board supervision of the bank’s subprime lending is inadequate.

In 2008, the Office of Inspector General conducted another audit, titled FDIC's Controls Over the CAMELS Rating Review Process. The audit report states that "the purpose of conducting a risk management examination is to assess an institution's overall financial condition, review management practices and policies, monitor adherence with banking laws and regulations, review internal control systems, identify risks, and uncover fraud or insider abuse." After conducting the audit, the OIG concluded that the Division of Supervision and Consumer Protection (DSC) should revise the Case Manager Procedures Manual in order to better track changes in CAMELS ratings. These changes are necessary to provide a more accurate evaluation and rating of an institution's financial condition and operations (Rau, 2008). The OIG also believes that the CAMELS review process remains viable for detecting at-risk banks.


Although onsite CAMELS ratings are considered reliable, the rate at which a rating deteriorates varies. The DSC has also acknowledged that "the SCOR system cannot assess management quality and internal control or capture risks from non-financial factors such as market conditions, fraud, or insider abuse" (Rau, 2002). If poor management quality is a leading cause of bank failures, how can we improve an early prediction model's ability to ferret out such moral hazards? Barr, Seiford, and Siems (1994) argue that using data envelopment analysis (DEA) in an early prediction model significantly increases predictive accuracy. DEA provides a management-quality metric designed to give early prediction models the otherwise missing "M" in the CAMELS rating system. Their analysis of 930 banks over a five-year time frame validates the metric and confirms that the quality of management is crucial in determining whether a bank fails (Barr, Seiford, & Siems, 1994). The research also revealed that the statistical signals of management quality could be seen "up to three years prior to failure."
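
A minimal sketch of the DEA calculation may help. The code below solves the standard CCR (constant returns to scale) multiplier model with SciPy to score each bank against its peers; it is not Barr, Seiford, and Siems's exact specification, and the input and output figures are hypothetical stand-ins for the resource and production measures a management-quality study would use. A score of 1.0 means no peer bank, or combination of peers, produces at least as much output with no more input; persistently low scores are read as a sign of weaker management.

    # Python sketch: CCR data envelopment analysis efficiency via linear programming.
    import numpy as np
    from scipy.optimize import linprog

    def dea_efficiency(inputs, outputs, unit):
        """CCR efficiency of one decision-making unit (bank), in (0, 1]."""
        X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
        n, m = X.shape          # n banks, m inputs (e.g., salaries, interest expense)
        s = Y.shape[1]          # s outputs (e.g., earning assets, interest income)
        # Variables: output weights u (s values) followed by input weights v (m values).
        c = np.concatenate([-Y[unit], np.zeros(m)])           # maximize u . y_unit
        A_eq = np.concatenate([np.zeros(s), X[unit]])[None]   # v . x_unit = 1
        A_ub = np.hstack([Y, -X])                             # u . y_j <= v . x_j for all j
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                      A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
        return -res.fun

    # Three hypothetical banks, two inputs and one output each:
    X = [[20.0, 300.0], [30.0, 200.0], [40.0, 500.0]]
    Y = [[100.0], [110.0], [90.0]]
    for i in range(3):
        print(f"bank {i}: efficiency = {dea_efficiency(X, Y, i):.2f}")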

Summary


Regulatory agencies responsible for supervisory and regulatory oversight of the financial industry must continue to improve the processes used to predict bank failures. Evidence indicates that the quality of a bank's management is a leading indicator of failure. Onsite evaluations are more reliable because they allow a more accurate assessment of a bank's management. Although offsite evaluations are less reliable than onsite evaluations, they give supervisory agencies a way to fill the gap between onsite examinations. Evidence also indicates that early prediction models that incorporate management quality offer higher predictive accuracy. Further work on improving the detection of moral hazard is warranted and should be conducted in order to improve the overall quality of our banking system.


As this case study was being completed, another six banks failed on July 16, 2010, bringing the total of failed institutions to 96 for the year. Three of these failures were in Florida, a state that has been hit particularly hard by bank failures. The future of our financial industry may be uncertain, and reform may be the only option for many financial institutions on the government's watch list.




References


Barr, R. S., Seiford, L. M., & Siems, T. F. (1994, December 1). Forecasting Bank Failure: A Non-Parametric Frontier Estimation Approach. Retrieved July 15, 2010, from SMU.edu: http://faculty.smu.edu/barr/pubs/bss-core.pdf

FDIC. (2010, July 9). Failed Bank List. Retrieved July 15, 2010, from FDIC.gov: http://www.fdic.gov/bank/individual/failed/banklist.html

Federal Reserve's Publication Committee. (2005, July 5). The Federal Reserve - Purposes & Functions. Retrieved July 15, 2010, from FederalReserve.gov: http://www.federalreserve.gov/pf/pdf/pf_complete.pdf

Lee, S. J., & Rose, J. D. (2010, May). Profits and Balance Sheet Developments at U.S. Commercial Banks in 2009. Retrieved July 15, 2010, from FederalReserve.gov: http://www.federalreserve.gov/pubs/bulletin/2010/pdf/bankprofits10.pdf

Malloy, M. P. (2003). Principles of Bank Regulation (2nd ed.). St. Paul, MN: West Group.

McIntyre, D. A. (2010, February 6). Bank Failures In 2010 May Hit 200, Up More Than 40%. Retrieved July 15, 2010, from 247wallst.com: http://247wallst.com/2010/02/06/bank-failures-in-2010-may-hit-200-up-over-40/

Office of Inspector General. (2009). Material Loss Review of IndyMac Bank, FSB (OIG-09-032) Audit Report. Washington, DC: Department of the Treasury.

Rau, R. A. (2008). FDIC's Controls Over the CAMELS Rating Review Process (Report No. AUD-08-014). Arlington, VA: Office of Inspector General - FDIC.

Rau, R. A. (2002). Statistical CAMELS Offsite Rating Review Program for FDIC-Supervised Banks. Washington, D.C.: Office of Inspector General - FDIC.

Scatigna, L. (2010, January 2). Financial Physician’s 2010 Forecast. Retrieved July 15, 2010, from thefinancialphysician.com: http://www.thefinancialphysician.com/blog/?p=1467



By: Joseph Dustin



