A simple law of numbers to identify fraud… Benford’s Analyser

By Ben Russell,

Specialist Sales Consultant

Chair of Taxonomy Architecture Guidance Task Force, XBRL International

As part of our drive to showcase how innovative apps can be built on the True North® Data Platform, we’ve released a new at-a-glance fraud indicator tool. And it’s free on our website!

The design team behind this app were inspired by the principles of Benford’s law. This is an observation about the frequency distribution of leading digits in many real-life sets of numerical data, including financial accounts. In data that follows Benford’s law, leading digits are more likely to be low than high. So, given a large enough sample of financial data, numbers will start with a 9 less than 5% of the time, whereas they will start with a 1 just over 30% of the time.
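
For the curious, those expected frequencies come straight from the formula log10(1 + 1/d). A few lines of Python (purely illustrative, and not part of the app itself) reproduce them:

import math

def benford_probability(d):
    # Benford's law: probability that a number's leading digit is d.
    return math.log10(1 + 1 / d)

for d in range(1, 10):
    print("P(leading digit = %d) = %.1f%%" % (d, 100 * benford_probability(d)))
# Prints ~30.1% for a leading 1, falling to ~4.6% for a leading 9.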


This is an example of a good fit to Benford’s law:

To show this in action, the team created the Benford’s Analyser app, a quick, high-level check that can be used to help detect fraud through analysis of the numbers in financial statements. The free version allows you to apply the rule to leading digits in the latest filing for companies that submit annual and quarterly accounts to the Securities and Exchange Commission (SEC).

As with any statistical check, those using Benford’s law need to be aware of the implications of the results. Firstly, small sets of numbers are unsuitable for statistical analysis; secondly, a bad fit is not an indicator that there is fraud, but rather that the numbers are not what would typically be expected. There are many company-specific reasons why this might be the case, of which fraud is just one. Regulators, government agencies and auditors applying the law can use the results to decide where to look further.

As shown in the images below, Benford’s Analyser creates a chart that compares the anticipated distribution of first digits – based on Benford’s law – with the actual distribution of the numbers reported in the company’s quarterly and annual financial statements.


This is an example where the numbers do not fit Benford’s law:

In this platform tool, a chi-square value above 20.09 – the one per cent critical value for eight degrees of freedom – means that if the underlying numbers followed Benford’s law, a deviation this large would be expected less than one per cent of the time. The value shown (61.32) therefore warrants further investigation, though, as noted above, a poor fit is a prompt to look closer rather than proof of fraud.
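
For readers who want to see the mechanics, here is a minimal sketch of this kind of test in Python – an illustration of the technique, not CoreFiling’s actual implementation:

import math
from collections import Counter

def leading_digit(value):
    # First non-zero digit of the decimal representation.
    digits = str(abs(value)).lstrip("0.")
    return int(digits[0])

def benford_chi_square(values):
    # Chi-square statistic of observed leading digits against Benford's law.
    counts = Counter(leading_digit(v) for v in values if v != 0)
    total = sum(counts.values())
    chi2 = 0.0
    for d in range(1, 10):
        expected = total * math.log10(1 + 1 / d)
        chi2 += (counts.get(d, 0) - expected) ** 2 / expected
    return chi2

# With eight degrees of freedom, the 1% critical value is 20.09; a larger
# statistic suggests the leading digits deviate from Benford's law.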

This is an example where there were too few numbers in the financial statement to apply this type of statistical analysis:

We’ve already had some great feedback for Benford’s Analyser from our latest rounds of innovation and customer outreach. Users were impressed with the immediacy of the model in creating tables for the filings under investigation. Why not have a go via the link below?

https://www.corefiling.com/benfords-analyser/

Tagged financial data, such as XBRL reports, is becoming ever more readily available, and CoreFiling is responding: we’re listening to our customers and bringing further automated checks to market.

This is an exciting innovation and a testament to the creativity of the designers at CoreFiling.


XBRL accounting taxonomy design and categorisation – Part 3: Coherence

In this series of articles, we propose a categorisation of taxonomies based on different aspects of their design. Using this categorisation, we look at the evolution of taxonomy design through three generations.

What is taxonomy coherence?

Our dictionary provides two definitions for ‘coherence’: the quality of being logical and consistent; and the quality of forming a unified whole. Both should apply to the architecture and design of XBRL taxonomies.

We said in the introductory article of this series that by coherent we meant a taxonomy that “hangs together” to produce consistent and comparable instance documents. That’s actually extending the concept beyond the taxonomy itself to the instance documents that can be created with it, but if we can’t guarantee to produce documents with those qualities then what’s the point of a coherent taxonomy?

Hypercubes and dimensions

One useful way of comparing the coherence of taxonomies is demonstrated by the graph below, which plots dimensions per hypercube for each of the three taxonomies we are examining (note that both scales are logarithmic):

The number of dimensions per hypercube is a good measure of how extensively the taxonomy uses dimensional data modelling to provide a unified data model focused on the relationship between concepts and the data aspects that apply to them.
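
To make the metric concrete, here is a small Python sketch of how such a profile might be computed, assuming the hypercube-to-dimensions mapping has already been extracted from the taxonomy (all names here are invented for illustration):

from collections import Counter

# Hypothetical input: each hypercube mapped to the set of dimensions it
# declares (in practice, extracted from the taxonomy's definition linkbase).
hypercubes = {
    "EarningsPerShareTable": {"ClassesOfOrdinarySharesAxis"},
    "PropertyPlantAndEquipmentTable": {"PpeClassesAxis", "CarryingAmountAxis"},
}

def dimensions_profile(hypercubes):
    # How many hypercubes have 1, 2, 3, ... dimensions.
    return Counter(len(dims) for dims in hypercubes.values())

def average_dimensions(hypercubes):
    return sum(len(dims) for dims in hypercubes.values()) / len(hypercubes)

print(dimensions_profile(hypercubes))   # Counter({1: 1, 2: 1})
print(average_dimensions(hypercubes))   # 1.5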

The difference between US GAAP (green) and IFRS (red) is simply one of magnitude – the US GAAP taxonomy (345 hypercubes, 272 unique dimensions) is approximately three times larger than the IFRS taxonomy (112 hypercubes, 113 unique dimensions). However, their ‘dimensions per hypercube’ profiles are very similar, starting at a peak with one-dimensional hypercubes and diminishing quickly as the number of dimensions increases. In both taxonomies, a large majority (70%+) of hypercubes have one or two dimensions.

The profile for UK FRS (purple) is strikingly different. There is just one hypercube with one dimension, and only ten hypercubes with two dimensions. This suggests a radically different approach to the design of the taxonomy (212 hypercubes, 115 unique dimensions) and in particular the use of hypercubes to represent highly-dimensional data.

The graph implies that, for US GAAP and IFRS, most concepts have been modelled in isolation with a small number of specialised dimensions. In contrast, the UK FRS taxonomy has been modelled comprehensively as a whole, with widely applicable dimensions being applied across numerous relevant concepts.

We will now explore in more detail the reasons behind the differences between the taxonomies.

The IFRS taxonomy

We’ve already seen that the architectural underpinnings of the IFRS taxonomy are derived from the International Financial Reporting Standards themselves.

The chief consequence of this on the design of the IFRS taxonomy is that users of it are at liberty to interpret the framework it provides very broadly. This can be to the detriment of instance document consistency and comparability.

The IFRS taxonomy is intended to act as a foundation for electronic reporting regimes in IFRS-using jurisdictions around the world. The primary ‘users’ of the taxonomy in this case are most likely taxonomy architects tasked with creating extended versions of the IFRS taxonomy suitable for local reporting purposes. This means that there is a considerable effort required on the part of the extension architects to implement a level of consistency on top of the IFRS taxonomy itself.

This “standards-first” approach shows itself in the fact that the IFRS taxonomy has the lowest average number of dimensions per hypercube of the three taxonomies we’re examining, at just under two. This is surely the result of attempting to model the low-dimension, presentation-oriented tables commonly seen in standards documents and in the corresponding financial reports. The taxonomy also has dimensions that are not associated with any hypercube, and some reportable concepts that likewise sit outside any hypercube.

By way of a small but illustrative example, consider the IFRS Earnings per share hypercube (table), which has six separate primary items and a single dimension.

Primary items:

• Basic and Diluted earnings (loss) per share from continuing operations (2 items)

• Basic and Diluted earnings (loss) per share from discontinued operations (2 items)

• Totals for both Basic and Diluted earnings (loss) per share (2 items)

Dimension:

• Classes of Ordinary Shares

There is also a “floating” dimension (axis) not associated with any hypercube – Continuing and discontinued operations – for breaking down continuing versus discontinued operations, which was not used in the Earnings per share hypercube. Had it been, the number of Earnings per share primary items could have been reduced from six to two (Basic and Diluted earnings concepts for each of the continuing, discontinued and (default) total dimension members). This demonstrates that a data-centric modelling approach would have simplified the taxonomy and improved its coherence.
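
The difference between the two modelling styles can be sketched in data terms – the concept and axis names below are simplified stand-ins, not the actual IFRS element names:

# Presentation-oriented modelling: six separate primary items.
eps_presentation = [
    "BasicEPSContinuingOperations", "DilutedEPSContinuingOperations",
    "BasicEPSDiscontinuedOperations", "DilutedEPSDiscontinuedOperations",
    "BasicEPSTotal", "DilutedEPSTotal",
]

# Data-centric alternative: two concepts qualified by the floating
# continuing/discontinued operations axis (whose default member is the total).
eps_data_centric = ["BasicEPS", "DilutedEPS"]
operations_members = ["ContinuingOperations", "DiscontinuedOperations", "Total"]

# Both models express the same six facts: 2 concepts x 3 members == 6 items.
assert len(eps_presentation) == len(eps_data_centric) * len(operations_members)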

In summary, the IFRS taxonomy is not as coherent as it might be, and that impacts the consistency and comparability of instance documents created to adhere to it.

The US GAAP taxonomy

The US GAAP taxonomy is roughly three times the size of the IFRS taxonomy in almost all respects, but it associates every dimension with a hypercube, even though around one third of all reportable concepts are not associated with any hypercube. Dimensionally speaking, this is a more consistent approach than that of the IFRS taxonomy, although the “free” reportable concepts still give instance document preparers far too many degrees of freedom, leading to documents that may not be entirely consistent with each other or wholly comparable.

Interestingly, however, it is on a par with the IFRS taxonomy in one important respect: the average number of dimensions per hypercube is only slightly larger, at just over two. This suggests that the hypercubes (or tables in US vernacular) in the taxonomy are primarily modelling the kinds of two-dimensional tabular presentations (for human consumption!) that one might see in a financial report or defined in an accounting standard (e.g. an axis of ‘concepts’ plus one or two dimensional breakdowns).

The “document-centric” approach of US GAAP therefore produces a taxonomy design whose data structures represent the conventional tabular presentations prescribed, or presented as exemplars, in standards documents and in common use among preparers of financial statements.

The rigorous architectural underpinnings of the US GAAP taxonomy have resulted in a coherent taxonomy design, although one that does not lend itself to ensuring similar consistency in instance documents, particularly due to the usage of filer taxonomy extensions, as we will discuss in a forthcoming blog.

The UK FRS taxonomy

The average number of dimensions per hypercube in the UK FRS taxonomy is much higher than either IFRS or US GAAP at just over eight. This is a key indicator of a radically different architectural approach in which data modelling has taken centre-stage. As if to emphasise this, all reportable concepts belong to a hypercube, which is a very strong indicator from the taxonomy’s architect to instance document preparers of what is expected of them. There is a coherent dimensional framework in which each and every reportable concept unambiguously sits.

The result is a collection of highly-dimensional hypercubes tightly bound to reportable concepts. Thanks to judicious use of dimensions with default members, the “tagging” task for any given reportable concept is, in the main, not onerous, yet the full expressive power of the hypercubes can be brought to bear when the need arises. Reportable concepts are only valid in certain well-defined circumstances, and those hypercubes have been equipped with all the necessary dimensions, whether or not they are actually needed in any given circumstance.

The UK FRS taxonomy design is based on a thorough analysis of the data that financial statements are required to convey. This results in a more coherent data model, since all the potential aspects (or “dimensions”) of an item of data can be considered holistically and independently of any traditional or prescribed presentation requirements. In this approach, the one- and two-dimensional tables of typical presentations are represented by one- and two-dimensional “slices” through higher-dimensional structures, and there is little or no need for preparers to expand the existing hypercubes – something that can only be achieved via entity-specific taxonomy extensions.
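
As a rough illustration of the slicing idea (with invented axis and member names), the familiar presentation table falls out of the larger structure by pinning every other dimension to its default member:

facts = [
    {"concept": "Turnover",
     "dimensions": {"OperationsAxis": "ContinuingOperations",
                    "RestatementAxis": "OriginallyStated",
                    "ConsolidationAxis": "Group"}},
]
defaults = {"RestatementAxis": "OriginallyStated", "ConsolidationAxis": "Group"}

def slice_on(facts, keep_axis, defaults):
    # Keep facts whose other dimensions all sit on their default members,
    # recovering the familiar one- or two-dimensional presentation table.
    for fact in facts:
        others = {axis: member
                  for axis, member in fact["dimensions"].items()
                  if axis != keep_axis}
        if all(defaults.get(axis) == member for axis, member in others.items()):
            yield fact

print(list(slice_on(facts, "OperationsAxis", defaults)))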

The coherence of the taxonomy naturally assures the consistency and comparability of instance documents. The taxonomy’s design places no unreasonable demands on the ingenuity of taxonomy extenders or instance document preparers.

Conclusion

We have seen how different choices in taxonomy design have influenced the coherence of the taxonomies under study, and how this is illustrated by the dimensions-per-hypercube metric.

In general, a coherent taxonomy should have a complete, consistent data model with full hypercube coverage, broadly-applicable dimensions and no unnecessary duplication either for dimensions or concepts. We have seen that these goals are most readily achieved by taking a data-model-first approach to taxonomy design. A coherent taxonomy leads to clear, unambiguous tagging and therefore clear, comparable instance documents with less opportunity for error.

If a taxonomy has a less extensive dimensional model, this requires extenders and/or instance document preparers to provide more interpretation of the taxonomy and to work significantly harder to produce consistent, comparable instance documents. This is by no means impossible, but some of the burden has been transferred from the taxonomy authors to taxonomy extenders and/or instance document preparers, who are less able to produce coherent, comparable data if they’re not equipped with the tools to do so.

In the next blog post we’ll cover taxonomy extensibility.

Filing Rules – the good stuff

CoreFiling’s Katherine Haigh and Joe Leeman are active contributors to XBRL International’s Filing Rules Working Group. In this interview, we find out what motivated them to join and what insights they’ve gained that are useful to regulators, filers, or both.

Why did you join the Filing Rules Working Group?

Katherine: Part of my responsibility as Head of Quality Assurance is to check filing rules and filing manuals for our clients, so I have a good understanding of the things that work and the things that don’t. Any contribution we can make to improve the consistency and quality of filing rules not only benefits the market, but helps me in my job!

What does the Working Group do?

Joe: We’re analysing several filing manuals from a broad range of organisations in order to understand the current market practice. This work gives us a good insight into both the structure of filing manuals and also the content, the sort of rules that get included and how they are described.

Katherine: In addition, we’re looking for evidence and opportunities to automate the processing of filing rules as much as possible.

That’s interesting Katherine, what’s the reason for that?

Katherine: So, well-structured filing manuals and well-written filing rules lend themselves to automatic testing of reports and submissions. Having these processes automated not only builds consistency, it reduces the manual effort required by report preparers by giving them the opportunity to automatically test their reports prior to submission. It increases the level of first-time correct filings, reducing the workload on the regulator’s incoming processes.

Joe: Here at CoreFiling we offer validation modules to our customers that provide exactly that service – it’s much easier to write these and keep them current if we’re relying on well-structured manuals and well-composed filing rules. As Katherine said, this service enhances the report preparation and submission processes for our filer customers. In some cases, we provide the same automated validation software to both the regulator and their filers, meaning the filers can have 100% confidence in the validity of each report before submitting.
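
To illustrate the idea – this is a sketch of the general technique, not CoreFiling’s validation software, and the rule ID is invented – a machine-readable filing rule reduces to a stable identifier, a severity and a pass/fail predicate that can be run automatically before submission:

from dataclasses import dataclass
from typing import Callable

@dataclass
class FilingRule:
    # Hypothetical structure: a stable ID, a severity and a pass/fail check.
    rule_id: str
    severity: str                      # "must" or "should"
    message: str
    check: Callable[[dict], bool]

def validate(report, rules):
    # Run every rule against the report before submission; return failures.
    return [f"{r.rule_id} [{r.severity}]: {r.message}"
            for r in rules if not r.check(report)]

rules = [FilingRule("RULE-042", "must", "reporting currency must be declared",
                    lambda report: "currency" in report)]
print(validate({"currency": "EUR"}, rules))   # [] - nothing to fix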

What other insights have you got from this work?

Joe: Interestingly, we’ve found a few anomalies that are potentially embarrassing for regulators and publishers of filing manuals!

Can you give some examples?

Katherine: Yes, we’ve seen examples where a regulator has clearly just copied content from another filing manual with no understanding of what the rule is supposed to achieve. It’s simply a copy-paste exercise. This is bad for the filer community, since it imposes an unnecessary constraint on their report, adding to the cost and effort, but adding no value to the filer or to the regulator.

Another example occurs in the Eurofiling space, where the supra-national regulator imposes a set of “should” rules in their filing manual. These rules might be, for example, to capture some useful statistical information that is not essential to the regulator. The National Competent Authority (NCA) then increases the severity of this rule to a “must” rule. All the organisations filing to that NCA are then required to submit the requested information in their reports or suffer having their report rejected.

As above, this process adds more effort and cost for the filers without creating any real benefit for either the filer or the NCA – though the supra-national regulator does always get the extra information it wants.

So, there are definitely issues with current practice in filing manuals. Is it all bad?

Joe: Fortunately, no, it’s not all bad. We’ve seen some examples of great practice that, to be frank, should be adopted by all publishers of filing manuals. For example, manuals that include a description of the goals, objectives and context for the filing rules are far easier to interpret and comply with. Rules that are properly arranged in a logical structure are also much easier to implement. The EBA, for example, categorises its rules into five classifications – filing syntax, instance syntax, context-related, fact-related and other – allowing the filer to see immediately where each rule applies.

Another example is a consistent numbering system. The EBA chooses a particular numbering scheme for its rules. Some NCAs simply copy this scheme, which we would support; others create a completely new one, which adds to confusion, makes it difficult to compare rules and makes it very difficult to automate them.

Katherine: We’d also strongly support the approach of differentiating XBRL validation rules from guidance rules and from rules relating to constraints in the submissions portal. This goes along with the idea that XBRL validation rules can easily be automated in the filing process. Guidance rules are more difficult, in that they can be hard to resolve into clear pass/fail expressions. Separating out the constraints imposed by the submissions portal also prevents some of the copy-paste rules from being carried over unnecessarily.

Joe: And while we’re on the topic of good practice, I’d like to see proper versioning in filing manuals, so it’s obvious what has changed when a new version is published. Ideally we’d see a consistent identification scheme where each rule is assigned a unique reference – whether a number or an error code – for life. That makes version control and change management much easier.

In summary then, what guidance can you give to publishers of filing manuals and to the communities of filers that need to comply with the rules?

Katherine: I think for the publishers there are two key messages: firstly, to treat the creation of filing rules with the same rigour that is applied to taxonomy development and publication. The second is to walk a mile in the filer’s shoes, to better understand the limitations, confusion and wasted effort that occur when irrelevant or over-severe rules regimes are applied.

Joe: And for the filer community, it’s not to be complacent. Filers should require their regulators and authorities to provide clear unambiguous filing manuals containing only the rules required to assure successful submission and no more. Also to participate in reviews of any draft manuals published by the regulators, to question any rules that are not clear, and to put the onus on the regulator to explain the need for any specific rules to be applied.

Thanks for the insight, Katherine and Joe – I’m sure that is useful information for publishers and consumers of filing manuals. What’s next for you two?

Katherine: We still have some more filing manuals to review, and a recommendations and guidance paper to publish, hopefully in time for the Data Amplified Conference in November.

Joe: I’d really like to use the insights we have gained in this exercise to help publishers of filing manuals improve their practices and processes in filing rules. I can see the benefit that would bring to the publisher and, as importantly, to their filing community.

Artificial Intelligence: Applications Today, Not Tomorrow.

From reports of the DeepMind scandal a few weeks ago, to several talking spots at this year’s London Fintech Week, July has arguably become Artificial Intelligence Month in the fintech world. But while there has been a great deal of interest in future applications of AI, it seems that less attention is being paid to the AI solutions already out in the wild. Let’s take a look at the state of play for AI today.

Why the rise of AI?

It’s no secret that as computers get more powerful, and society gets more connected, organisations across all industries are feeling the pressure of too much data. Whether you’re gathering it, storing it, or trying to use it, modern data volumes present vast operational challenges. For example, Walmart, the world’s largest retailer, is reportedly building a private cloud that will process 2.5 petabytes of information each hour. That’s one company processing the equivalent of over half a million DVDs (or the estimated memory capacity of the human brain) each hour, every day.
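A quick back-of-the-envelope check of that comparison, assuming a standard 4.7 GB single-layer DVD:

PETABYTE = 10 ** 15                   # bytes
DVD_CAPACITY = 4.7 * 10 ** 9          # single-layer DVD, in bytes
print(2.5 * PETABYTE / DVD_CAPACITY)  # ~531,915 DVDs' worth of data per hour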

The problem is bandwidth: a human being can only process so much information at once. Using Artificial Intelligence is a good way of dealing with such staggering amounts of data – and although we haven’t quite reached the I, Robot scenario, research into everything from deep learning to artificial neural networks continues to gather pace.

Applications today, not tomorrow.

One way that AI is helping to conquer the data mountain right now is through automation. The advantage of AI in this area is scalability – in theory, AI can learn how to recognise complex patterns of information that would normally require human understanding; AI-based pattern recognition has already been used to build surveillance cameras that can distinguish between people, objects, and even events.

At CoreFiling, we recognise that potential. That’s why we built AI-based automation right into our XBRL document creation tool, Seahorse. Seahorse learns how to interpret the fields in forms and automatically tags the information, thanks to a form of machine learning called logistic regression. And because it’s hosted in the cloud, Seahorse benefits from every filing: each document scanned refines its detection method, allowing Seahorse to achieve incredible levels of accuracy.
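
For readers curious what logistic-regression-based tagging looks like in practice, here is a toy sketch using scikit-learn – purely illustrative, with invented labels and concept names, and not Seahorse’s actual model or training data:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: form-field labels paired with taxonomy concepts.
labels = ["Turnover for the year", "Profit before tax", "Total assets"]
concepts = ["uk-core:Turnover", "uk-core:ProfitBeforeTax", "uk-core:TotalAssets"]

# Vectorise the label text, then fit a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(labels, concepts)

# Predict a concept for a previously unseen field label.
print(model.predict(["Profit on ordinary activities before taxation"]))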

Click here to learn more about Seahorse.

A Wealth of Potential: Interview with Ian Hicks

Last week, CoreFiling’s Ian Hicks took part in the FRC’s Digital Future: Data round table, and discussed ways to combat the current “under-performance” of financial data. After the event, we sat down with Ian to talk about the benefits and challenges of using XBRL for Digital Future.


DAN: Is XBRL right for Digital Future?

IAN: Oh, absolutely. XBRL has applications around the world, and it’s flexible enough to meet the main goal of Digital Future, which to me is data versatility. The nice thing about XBRL is that it creates a kind of blank canvas with data, from which you can start to create solutions that meet the specific needs of each client or industry – that could be data automation, integration with other reporting media, and so on. That doesn’t mean it’s a perfect fit, though!

DAN: How so?

IAN: XBRL is a powerful standard, but it needs to be better at hiding complexity. CoreFiling already helped to address this in some instances when Philip Allen developed iXBRL, but XBRL itself still needs specialist applications to be useful.

DAN: You currently chair the XBRL Best Practices Board. Is reducing complexity something you focus on?

IAN: Oh absolutely, yes. But that relies on more than just developing XBRL. To reduce the complexity of an XBRL-based system, you need to take a holistic approach – develop working methods and processes that enhance customer experience, for example, or take advantage of new technologies.

DAN: What would be on your wish list for XBRL development?

IAN: I think the most useful thing for all applications, including Digital Future Reporting, would be to make XBRL a little more visual – more “renderable” – outside of specialist tools. Similar to how Microsoft plug-ins can be embedded in a browser. We’ve already started to address this with our Beacon platform, which renders XBRL instance data and displays it in a format that users can engage with.

From a more technical standpoint, I’d like to see features such as non-repudiation of instances – i.e. was the instance created by an authorised person, has its content changed, is the date stamp correct, and so on. More widespread use of auto-tagging would also be a great benefit to the preparer, accountancy and audit communities, both from an XBRL perspective and from the point of view of choosing the most appropriate concept.
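
As a sketch of the non-repudiation idea (using the Python cryptography library and a hypothetical file name), signing the instance bytes lets a consumer verify both who signed the document and that its contents are unchanged:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Sign the raw bytes of an XBRL instance so a consumer can later verify
# who created it and that its contents have not changed since signing.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("filing.xbrl", "rb") as f:   # hypothetical file name
    instance = f.read()
signature = private_key.sign(instance)

# verify() raises InvalidSignature if the document has been altered.
public_key.verify(signature, instance)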

DAN: How would those advances relate to Digital Future Reporting?

IAN: The Digital Future Reporting model is all about taking advantage of technology. The advantage of using XBRL is that it already supports features like auto-tagging – that’s how CoreFiling’s instance creation tool, Seahorse, is able to auto-tag filing documents, for example. The challenge is just to maximise the usage of these features.

DAN: Do you think XBRL is under-used in the fintech industry?

IAN: XBRL itself has an enormous user base around the world, but I do think its more advanced features are overlooked – which is why the Digital Future Reporting model is so key. But I think the most important thing is to deploy XBRL where it can be of maximum use. There was an interesting discussion in Dublin regarding “NOXBRL” – not a rejection of XBRL as it might imply, but rather the idea of “Not Only” XBRL. This discussion was around hiding the complexity of XBRL reporting from filers. It went on to cover the idea that XBRL is fundamental, but that other technologies should be included to meet the overall regulatory reporting need.

To give you one example, investment analysts have already developed sophisticated systems sourcing data in a multitude of ways, like “web-scraping”. XBRL has the capability to massively enhance these processes through its ability to rapidly and effectively analyse large sets of structured and unstructured data – this ability to enhance other technologies is what makes XBRL so useful. We’re already seeing this in practice in other industry sectors.

Linked Data is another example. Linked Data and XBRL appear to have progressed in parallel: there needs to be closer co-operation to benefit from using both technologies. This co-operation could result in massive benefits to the analyst community by simplifying the process of comparing information from seemingly disparate data sets.

DAN: What about outside fintech?

IAN: Extending XBRL beyond finance isn’t just possible, it’s already happening, and I fully encourage it!

DAN: Can you give us an example?

IAN: Non-financial data looks to me to be the next ideal candidate for XBRL. We’ve already seen something similar in the US, with the SunShot Initiative’s Orange Button Programme – XBRL is going to be used for solar data gathering and analysis. It’s a great idea, because solar data comes from so many different sources across America. You need to keep data like that in a single, consistent format to make it useful. Equally, organisations like AECA in Spain are pioneering the reporting and analysis of sustainability data using XBRL.

DAN: Do you have any advice for people already working with XBRL?

IAN: I think taxonomy developers should broaden their approach when developing a taxonomy. Rather than thinking just about the regulation to be met, they should also consider and take into account, far more thoroughly, the needs of the groups who will be consuming and analysing the data.

I’d also advise the filer communities to require solution vendors to provide the means to simplify XBRL filing, hiding the complexity from users and report preparers. This can only encourage more widespread adoption of XBRL in digital reporting.

DAN: Thanks Ian!

IAN: Happy to help.


To find out more about the FRC’s Financial Reporting Lab, and the goals of the future data model, click here. And for information on CoreFiling’s XBRL services, click here.

EDITOR’S NOTE: parts of this interview have been shortened for clarity.

CoreFiling at the FRC’s Digital Future Round Table

This week, CoreFiling’s Ian Hicks joined over 20 other industry representatives in London for the Digital Future: Data round table, hosted by the FRC’s Financial Reporting Lab. Discussions focused on how XBRL can help facilitate Digital Future Reporting: a 12-step model proposed by the FRC that uses technology to combat the current “under-performance” of financial data – read more about Digital Future Reporting here.

The event was a great success, attracting attendees from key fintech organisations (including Vizor, IFRS, Workiva, the FRC, and the Bank of England). CoreFiling was on hand to provide important insights about XBRL and its capabilities – and as chair of the XBRL Best Practices Board, Ian also outlined how to follow best practice when implementing it.

XBRL and the Digital Future model

As an open standard, XBRL is a good fit for Digital Future, because it isn’t constrained to a single supplier. The standard is also well established, with many examples of large-scale implementation across the world (including tax ecosystems in the UK, the Middle East, and now the USA); XBRL provides a proven framework for creating simple, cost-effective solutions, without needing to “reinvent the wheel” or develop brand new technologies.

As IFRS’s Rita Ogun-Clijmans noted during the discussion, “XBRL should focus on where it fits best into the Digital Future Reporting model” – CoreFiling agrees, as the strength of XBRL lies in its inherent auditability, data provenance, and versatility; XBRL can be linked with other reporting media to assist compatibility.

For more information about XBRL, iXBRL, and CoreFiling’s solutions, visit corefiling.com.

UK Government Joins the Party at London Fintech Week 2017

London Fintech Week is back for 2017, with a new line-up of conferences, workshops, exhibitions and meet-ups for financial innovators. For those who haven’t attended before, London Fintech Week is a 7-day event designed to act as a hub for international financial professionals, held across the City of London, Canary Wharf and East London’s “Tech City”.

The event has gone from strength to strength since its inception in 2014 – and this year’s conference is the biggest yet. Starting on 7th July, London Fintech Week 2017 will host the UK government’s first International Fintech Conference. The conference aims to attract investors to the UK’s “high growth fintech centre”. Click here to read a comprehensive overview of the event by Liam Fox (Department for International Trade) on GOV.UK.

Ready. Set. Innovate.

At CoreFiling, we know that developing new technologies is key to the success of the fintech industry – so here are the top 4 events we’re looking forward to most at this year’s London Fintech Week:

  • Regulatory Sandbox Panel. Barclays, Santander and Credit Suisse meet to unlock the potential of regulatory reporting. As experts in regulatory reporting technology, we’ll be watching with interest.
  • Blockchain & Cyber Security Showcase. Blockchain is set to be the next evolution in financial security – but how far can we trust the hype? Thomson Reuters, the Linux Foundation and Applied Blockchain investigate.
  • Machine Learning & AI: Is it OK to Panic Yet? With the recent Deep Mind scandal still fresh in our minds, we look forward to seeing this panel’s take on an important emerging technology.
  • The FCA’s Regtech Update & Innovation Hub Workshop. Innovation is at the heart of what we do at CoreFiling, and as long-time supporters of the FCA, we anticipate big things from this workshop.

A Bright Future for Orange Button & Solar Energy

CoreFiling recently joined XBRL US, the SunSpec Alliance, Wells Fargo and the US Department of Energy’s SunShot Initiative to co-host the “Orange Button” webinar. The webinar focused on the role of XBRL in the US DoE’s Orange Button programme – a solar energy development plan aimed at reducing solar energy costs (and growing the US solar industry).

Mark Goodhand, Head of Research at CoreFiling and a global authority on XBRL specifications, was on hand to give attendees expert advice about XBRL and its features. Solar energy is already big business in America, and a data-led approach is key to its growth; Mark showed that adopting XBRL will simplify and standardise solar data, aiding (and in some cases enabling) all aspects of the SunShot Initiative. The discussion of XBRL’s benefits covered everything from improved feasibility studies and financial projections, to better planning, smarter construction, and support for future research.

Chris Mills then put theory into practice, by taking the audience through a detailed demonstration of CoreFiling’s taxonomy development suite, Yeti.

The webinar was a real success, and gave the audience an exciting look at the bright future of solar development. Click here to watch the webinar on YouTube, or read a write-up on the XBRL US page. You can also get involved with solar energy by attending the InterSolar conference in July.

Launched: Beacon can solve your regulatory submission errors.

Filing rejections are a real problem for businesses that submit to regulators. Even if you have a solution in place to create your XBRL filings, there is no easy way to decode them, or to check what you’re actually sending. Using our 20+ years of experience in data integrity, we’ve created the solution:

CoreFiling is excited to announce the launch of Beacon: our cloud-based filing review platform. And to celebrate, we’re offering free trial access to Beacon for all users.

Beacon is a secure, collaborative review and validation tool that integrates effortlessly into your existing workflow. XBRL filings contain a lot of encoded information that you can’t see (or check) – but with Beacon, you can view that data in incredible, granular detail. The Beacon trial gives you access to Beacon’s advanced review tools: users can upload and review one XBRL document, completely free. Better yet, you can store your document in our cloud for up to three years… and review it as many times as you like.

Plus, we’re holding a free webinar for all filers, showing you how to avoid the most common errors in CRD IV (incl. IFRS 9) & Solvency II submissions.

Here are just a few of the ways that Beacon helps your organisation solve its submission errors:

The Benefits

  • Beacon lets you decode the XBRL filing. You can view your data inside a regulatory template, and see your filing as the regulator will see it. You can investigate broad sections of the filing, or drill down to individual data points, then apply targeted validation rules if you spot an error.
  • Beacon creates an advanced filing management system that’s flexible enough to fit right into your current workflow. Store all your filings in a secure, change-tracked environment. Control data access through custom user profiles. Import LDAP users and connect to your existing data sets with Beacon APIs.
  • Beacon promotes collaboration. Cloud access means colleagues can work together, anytime, anywhere. In fact, Beacon allows an unlimited number of users to view and mark up a filing at any one time. And thanks to Beacon’s cloud architecture, even the largest XBRL documents are accessed quickly – with no performance loss on your PC.

Download the PDF for more information about Beacon.

How do I sign up?

You can access Beacon right away by visiting our launch page, here. All you need to enter is your name, e-mail address and company. And don’t forget to sign up for our free Solvency II & CRD IV webinar too.

Announced: Seahorse® is the T4U Successor

After the recent EIOPA announcement that the XBRL reporting tool T4U will be decommissioned next month, many filers are now looking for a quick solution to keep their submissions compliant.

At CoreFiling, it’s our business to keep you compliant – that’s why we are proud to announce that we are offering a free trial of our cloud-based regulatory filing platform, Seahorse®, the successor to T4U.

This free trial gives you the opportunity to create one complete filing to submit to a regulator – and even better, users will have three months to explore the software before submitting their filing. Here are just some of the ways in which Seahorse® can help your organisation:

The Benefits

  • Seahorse® lets you create fast, error-free XBRL filings. Unlike T4U, its data rendering is XBRL-based, so the reports you send will never have data conversion errors or approximations. The data is 100% accurate every time.
  • Seahorse® is hosted in the cloud. Its architecture lets you update taxonomies instantly, with no tedious installations. You can create and view your filings anywhere, any time.
  • Seahorse® allows you to easily create XBRL filings in the familiar environment of Microsoft Excel.

How do I sign up?

Trial access is available to anyone. To claim your trial, simply visit our website and fill out the sign-up form.