There is, and always has been, a “crisis” in legal education. The nature of the crisis in one jurisdiction is, moreover, often familiar to those outside it: concerns about cost, access, ethics and practice-readiness affect us globally. It is an equally common response, it appears, to investigate and seek to address those concerns by commissioning a review of some kind. And there’s the rub – these reports are of different kinds. They have different objectives, take different approaches and will achieve different things. Commissioners and users of such reports need, I suggest, to have some clarity about these differences.
In the research seminar which I gave at NLS on 14th November 2018, I suggested an initial typology (or possibly taxonomy) of legal education reviews. Although there is clearly scope for hybrids, it seemed to me that there are, essentially, four models:
1 Model 1 is a collation of structural information from different countries: the “what” of legal education. The source information comes from insiders who know their own jurisdiction, but the corresponding risk is that, unless consciously monitored, it can go out of date very quickly. The IBA, for example, hosts such a collation designed principally for lawyers who wish to practise outside their jurisdiction. The EU has also collected information on lawyer qualification systems in its member states, and federal countries such as Canada and the USA make information available to aspiring lawyers on an internal, state-by-state basis.
2 Model 2 is more evaluative, and something I have described as the “expert review”. Here, the most significant aspect may be the people carrying out the evaluation. They may have been selected for their expertise, or because they represent, and are trusted by, a particular group of stakeholders. It is not uncommon for foreign experts to be invited to participate. The results are not normally empirically analysed but, by comparison with Model 1, trends are identified and recommendations made. Such reviews can also, as in the case of the influential US Carnegie Report, generate useful concepts.
3 Model 3 is a consultative exercise. It operates deductively, seeking responses to a proposal put forward by a regulatory body. That proposal may be purely conceptual or set out in some degree of detail. The results are not necessarily empirically analysed and, of course, the exercise is neither a referendum nor a binding vote.
4 Model 4 is, by contrast, a consciously empirical exercise, operating deductively from a starting hypothesis or inductively from research questions. It may be carried out by insiders or outsiders, but the focus is on the process of data collection, qualitative and sometimes quantitative, and on empirical analysis. That analysis, depending on the remit of the project, may include international investigation and literature review. The challenge for users, then, is to remember that the project has a limited but defined remit, and to understand the nature of the data collected and presented.
There is much useful work being undertaken of these different kinds. A snapshot of publicly available reports in the period 2010–2013, for example, suggested that reviews were being undertaken in (at least) Australia, England and Wales, Canada, France, Mauritius, New Zealand, Russia, South Africa, and the USA. Once we know what we have, and what we don’t have (substantial comparative studies and, with some honourable exceptions, substantial longitudinal studies), we can better, I suggest, identify the next phase of legal education review activity.
19th November 2018