Open Console — Human interface
The core of "Open Console" contains published information of various kinds from many different sources. Also, there are many different automated consumers of that knowledge. But there is also a special component: the "Open Console" web-interface. Via that interface, the website owner can query and publish information about his own website.
Relations to a website
For simplicity, the documentation on this website usually speaks of the "website owner". The real picture is more complex: Open Console implements an access management system which distinguishes (see the sketch after this list):
- (Proven) personal identities
- Domain names
- Websites within the domain
- Sub-websites, for instance per language
- Within a sub-website, separate pages and images
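The hierarchy above can be pictured as nested records. The sketch below models it with hypothetical Python dataclasses; all names are illustrative assumptions, not Open Console's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical model of the access-management hierarchy described above.
# Every class and field name here is illustrative, not a published schema.

@dataclass
class Page:
    path: str                       # a single page or image, e.g. "/en/logo.png"

@dataclass
class SubWebsite:
    label: str                      # e.g. a language code such as "en"
    pages: list[Page] = field(default_factory=list)

@dataclass
class Website:
    root: str                       # e.g. "https://www.example.org/shop/"
    sub_websites: list[SubWebsite] = field(default_factory=list)

@dataclass
class Domain:
    name: str                       # e.g. "example.org"
    websites: list[Website] = field(default_factory=list)

@dataclass
class Identity:
    """A (proven) personal identity, which may manage several domains."""
    email: str
    domains: list[Domain] = field(default_factory=list)
```

Access rights can then be granted at any level: one person may manage a whole domain, another only a single sub-website or a few pages.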
Right to correction
Via the Open Console interface, the owner of a website can find out who has published information about their website: for instance, who publishes that the domain sends spam. Under EU law, digital services must disclose the information they hold about you on request, and must correct that information when the person concerned reports a mistake. Without Open Console, such mistakes are difficult to discover and hard to get corrected. We hope that Open Console reduces people's fear about their internet privacy.
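Purely as an illustration of that flow (Open Console has not published such an API here; the endpoints and payloads below are invented), a correction could be a two-step exchange: first fetch what publishers have stored about your domain, then report the mistaken fact.

```python
import json
from urllib.request import Request, urlopen

# Invented broker endpoint, only to sketch the request/correction flow.
BROKER = "https://broker.example/api"

def facts_about(domain: str) -> list:
    """Ask which facts publishers have stored about a domain."""
    with urlopen(f"{BROKER}/facts?domain={domain}") as resp:
        return json.load(resp)

def request_correction(fact_id: str, reason: str) -> int:
    """Report a mistaken fact, exercising the right to correction."""
    body = json.dumps({"fact": fact_id, "reason": reason}).encode()
    req = Request(f"{BROKER}/corrections", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return resp.status
```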
Connecting to publishers
Where Google Search Console informs you about what Google knows about your website, and Bing Webmaster Tools tells you about Microsoft's handling, the Open Console interface offers you a single point of contact to all publishers.
There is a realistic fear that many publishers will fight for the best visibility in the user's interface. Open Console has been designed to mitigate that issue: communication happens on an opt-in basis only.
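Opt-in can be thought of as a filter on the owner's side: messages from a publisher are shown only after the owner subscribes to that publisher. A minimal sketch, with made-up publisher names:

```python
# Publishers this website owner has explicitly opted in to.
subscriptions = {"search-engine.example", "spam-monitor.example"}

def visible_messages(inbox: list) -> list:
    """Keep only messages from publishers the owner subscribed to."""
    return [m for m in inbox if m["publisher"] in subscriptions]

inbox = [
    {"publisher": "search-engine.example", "text": "Crawl completed."},
    {"publisher": "pushy-ads.example",     "text": "Buy more visibility!"},
]
print(visible_messages(inbox))  # the unsubscribed publisher is filtered out
```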
Publishing your own information
Some of the "publishers" are actually information brokers: they provide an interface to the website owners to add their own facts to the knowledge base of "Open Console".
One example is a publisher which lets you specify the structure of your website, helping you set up sitemaps and the robots.txt, and which lets you specify page-change predictions. This information can be accessed by "crawlers" which extract website data for search engines.