Human Collaboration

BEYOND IOT: BOTH A BUSINESS AND A SOCIETAL NEED

This site generally describes TrustCentral’s technology (through a variety of use cases) from the viewpoint of its application to the Internet of Things.  However, there are significant uses of this technology beyond the IoT: for example, applications that not only support authenticated and secure B2B and B2C interaction, but also support authenticated and secure C2C interaction, thereby providing significant social benefits.

TrustCentral’s non-IoT uses provide support for secure and private communications between authenticated individuals, group members, management collaborators, and others, and within established collaborative groups (and subgroups) in a variety of business, governmental, academic and social contexts.  This technology achieves this while simultaneously maintaining high security of exchanged communications (typically encrypted), with optional support for digital audit trails.

Group members share information.  Groups can be large (e.g., Facebook) or small (e.g., a bulletin board or a club).  Sometimes the civility of conversations within a group drops to a level at which productive conversations become stifled.  That does not support a civil society.  While robust dialogue should be part of any community, dialogue can degrade to the level of trolls, bullies, liars, and the like.  Creative and/or effective people typically don’t want to be distracted by (or be forced to engage with) those kinds of unproductive interactions.  While personal anonymity is a valuable and intrinsic component of the TrustCentral design, the digital (anonymous) identity of an endpoint cannot be spoofed.  Both user trustworthiness and reputation (whether known or anonymous) can be scored for the benefit of all users.  (Refer to the GROUPS and TRUSTSCORE sections below for additional details as to how these integrated elements of TrustCentral’s technology can be applied for such purposes.)

A future threat to civil discourse across society and within social and business groups from AI (Artificial Intelligence) BOTS

A recent blog post by OpenAI (the AI research organization co-founded by Elon Musk) is concerning, as it portends a potential future in which AI BOTS will be able to impersonate real people in a manner that is virtually undetectable.  The post, entitled “Better Language Models and Their Implications,” includes this description:

“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.”  (See https://blog.openai.com/better-language-models/ for more information.)

The post goes into the technical nature and capabilities of this technology (which is not the subject of this page).  Many uses of this technology are potentially beneficial and productive for individuals, groups, businesses, and society; that is not what is being examined here.  What is of concern to us at TrustCentral (and what our technology can help mitigate) are the potential malicious purposes to which such technology might be put.  The OpenAI post considers this:

We can also imagine the application of these models for malicious purposes, including the following (or other applications we can’t yet anticipate):

  • Generate misleading news articles
  • Impersonate others online
  • Automate the production of abusive or faked content to post on social media
  • Automate the production of spam/phishing content

These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns. The public at large will need to become more skeptical of text they find online, just as the “deep fakes” phenomenon calls for more skepticism about images.

Today, malicious actors — some of which are political in nature — have already begun to target the shared online commons, using things like “robotic tools, fake accounts and dedicated teams to troll individuals with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed”.

TrustCentral technology supports: authentication of remote identities; authentication of transmitted content (e.g., using digital signing); security of transmitted content/communications; TrustScores of identities; reputation metrics of behavior and content from authenticated identities; and more.  This technology is an ideal system not only for protecting the integrity of dialogue, civility, trust, and confidence between individuals, within groups and businesses, and across society, but it will also prove effective in detecting and eliminating BOTS that may attempt to disrupt such activities.  TrustCentral looks forward to joining other like-minded stewards of a civil society in working to immunize our society and our mutual communications from trolls, bullies, and liars, whether human or AI BOTS.


THE TRUST STACK FOR HUMAN COLLABORATION


THE TECHNOLOGY

Key elements of the TrustCentral system include:

  • A secure, persistent digital presence is authenticated for users of devices that support a device root of trust.  Examples of such devices include iPhones, computing devices with widely available TPM (Trusted Platform Module) chips, and others onto which a TrustCentral app is installed
  • The fundamental architecture of the TrustCentral system is based on secure, authenticated communication lines established between users.  These communication lines are built through the application of an Inviter-Invitee Protocol (supporting mutual authentication between remote endpoints), through which authenticated, persistent Secure Communication Lines between user endpoints are established

INVITER-INVITEE PROTOCOL

The TrustCentral system’s patented Inviter-Invitee Protocol suite provides tools for an Inviter (e.g., a user or trusted partner) to vouch for the identity of an Invitee (e.g., another user) who successfully authenticates and completes the protocol, thereby allowing for the establishment of a secure communication line between the two endpoints.
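The invite-and-vouch flow described above can be sketched in code. This is a minimal illustration, not TrustCentral’s actual protocol: the function names, fields, and hash-based challenge proof are assumptions made for exposition.

```python
import secrets
import hashlib
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of an Inviter-Invitee exchange; field names and the
# challenge mechanism are illustrative assumptions, not the real wire format.

@dataclass
class Invitation:
    inviter_id: str     # the inviter's (possibly anonymous) identity
    invitee_hint: str   # out-of-band hint of who is being invited
    nonce: str          # fresh challenge to prevent replay

def create_invitation(inviter_id: str, invitee_hint: str) -> Invitation:
    return Invitation(inviter_id, invitee_hint, secrets.token_hex(16))

def accept_invitation(inv: Invitation, invitee_id: str, invitee_pubkey: str) -> dict:
    # The invitee answers the challenge, binding its key to this invitation.
    proof = hashlib.sha256((inv.nonce + invitee_pubkey).encode()).hexdigest()
    return {"invitee_id": invitee_id, "pubkey": invitee_pubkey, "proof": proof}

def establish_line(inv: Invitation, acceptance: dict) -> Optional[dict]:
    # The inviter verifies the proof before a Secure Communication Line
    # between the two endpoints is recorded; a bad proof yields no line.
    expected = hashlib.sha256((inv.nonce + acceptance["pubkey"]).encode()).hexdigest()
    if acceptance["proof"] != expected:
        return None
    return {"endpoints": (inv.inviter_id, acceptance["invitee_id"]),
            "invitee_pubkey": acceptance["pubkey"]}
```

In this sketch, a tampered or replayed acceptance fails verification, so no communication line is created.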

SECURE COMMUNICATION LINES

Communication Lines are characterized by endpoints with context-specific identities that are typically governed by an end-to-end digital agreement.  They are auditable, brokered, trusted relationships, where such relationships/digital agreements can each stand alone (for privacy purposes) or can leverage the build-up of identity-confidence levels across relationships.  The TrustCentral system includes an attribute authority (AA), which acts as a trusted third party supporting users in establishing each communication line by: (a) establishing identities of users; (b) uniquely associating keys to identities and their invitees; and (c) uniquely associating a certificate and digital agreement with each communication line.
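The attribute authority’s three responsibilities, (a) through (c), can be sketched as simple bookkeeping. The class and field names below are assumptions made for illustration, not the production schema.

```python
import uuid
from typing import Optional

# Illustrative sketch of an attribute authority (AA) acting as the trusted
# third party for communication lines; names are assumptions for exposition.

class AttributeAuthority:
    def __init__(self):
        self.identities = {}   # identity_id -> public key
        self.lines = {}        # line_id -> communication-line record

    def register_identity(self, identity_id: str, pubkey: str) -> None:
        # (a) establish the identity and (b) uniquely bind a key to it
        self.identities[identity_id] = pubkey

    def establish_line(self, inviter: str, invitee: str, agreement: str) -> Optional[str]:
        if inviter not in self.identities or invitee not in self.identities:
            return None  # both endpoints must be known to the AA
        line_id = str(uuid.uuid4())
        # (c) a certificate/digital agreement is tied to this one line
        self.lines[line_id] = {
            "endpoints": (inviter, invitee),
            "keys": (self.identities[inviter], self.identities[invitee]),
            "agreement": agreement,
        }
        return line_id
```

Because each line record carries its own agreement, a relationship can stand alone for privacy, as described above.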

UNIQUE AND MULTIPLE IDENTITIES

Typically, individuals have different personas they use in their different relationships: professional, social, parental, an avatar, etc. One of the features that makes the TrustCentral system unique is that the primary enabling security is based upon communication lines, not endpoints. This is why each user can have multiple identities: a user can be anyone at the end of the communication line; what matters is that the entity at each endpoint authenticates the other’s claimed identity and agrees to communicate. Thus a given user may have a plurality of identity profiles. Further, one identity may be established as the holder of a particular digital wallet.
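The idea of one user holding several context-specific personas, each pinned to a particular communication line, can be sketched as a small data structure. The names here are illustrative assumptions.

```python
# Sketch of a user with multiple identity profiles, where each Secure
# Communication Line is bound to exactly one persona (names are assumptions).

class User:
    def __init__(self):
        self.profiles = {}   # context -> identity-profile data
        self.lines = {}      # line_id -> context used on that line

    def add_profile(self, context: str, display_name: str) -> None:
        self.profiles[context] = {"display_name": display_name}

    def open_line(self, line_id: str, context: str) -> dict:
        # The counterparty on this line sees only the chosen persona.
        self.lines[line_id] = context
        return self.profiles[context]
```

Security attaches to the line, so the professional persona used on one line never leaks into a social line held by the same user.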

SECURITY AND DOCUMENTS

The system supports the encryption and decryption of documents.  The system protects entry into the system’s software and, when desired, applies additional security for entry into specified encrypted documents.  From there, the system extends the ability to validate identities (through various proprietary and common methods) and to score identities (through proprietary methods).  The system is agnostic to the platform and/or service each user uses to transport or store their encrypted content.

OPTIONAL NOTARIZATION OF AN ACTUAL IDENTITY BY A TRUSTED PARTNER

Optionally, it may become valuable for a user to validate and establish a claimed (and verifiable) actual identity.  To do this, a user may utilize a notarization service.  After first installing the TrustCentral client app software, the user establishes a Secure Communication Line with a Trusted Partner Notarization Service within the system.  Upon installation, only an “unknown” identity (which is nonetheless unique and unspoofable) would be established through the initial application of the Inviter-Invitee Protocol.  Once the Secure Communication Line is established, the user (which may be an individual, business, or other entity) may present physical or digital documents (such as a driver’s license, Social Security card, birth certificate, passport, or other document) to the Trusted Partner.  It should be noted that the standard encryption and security capabilities available to users of the TrustCentral system allow the user to transmit any such documents to the Trusted Partner in an encrypted fashion such that only the Trusted Partner will be able to access them.  The Trusted Partner reviews such documents and may require the user to answer challenge questions or provide biometric identification that allows the identity to be authentically established (a more precise process than the authentication steps commonly used by credit bureaus for online identity authentication).

Once an actual identity is validated and associated with that user, the Trusted Partner Notarization Service may digitally certify its authentication of that identity (and its specific association to that unique installation of the TrustCentral client app software) by generating a signed digital identity token (DIT), which it then provides to the user’s client app.  That DIT will include the user’s public key.  Such a certification may be recorded on a blockchain and/or on another record, and/or simply be a digital token that is available to the user for further use when asserting its authenticated identity.
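Issuing and later checking such a token can be sketched as follows. A real Trusted Partner would use an asymmetric digital signature; an HMAC stands in here only so the example stays standard-library, and all names and the key are placeholders.

```python
import hmac
import hashlib
import json

# Sketch of a digital identity token (DIT). HMAC is a stand-in for the
# Trusted Partner's real asymmetric signature; the key is a placeholder.

NOTARY_KEY = b"trusted-partner-secret"  # assumed signing key, for illustration

def issue_dit(user_id: str, user_pubkey: str) -> dict:
    # The DIT binds the validated identity to the user's public key.
    payload = json.dumps({"user_id": user_id, "pubkey": user_pubkey},
                         sort_keys=True)
    sig = hmac.new(NOTARY_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_dit(dit: dict) -> bool:
    # Any tampering with the payload invalidates the token.
    expected = hmac.new(NOTARY_KEY, dit["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, dit["signature"])
```

A relying party that trusts the notary can then accept the public key inside the token as belonging to the asserted identity.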

GROUPS

TrustCentral’s technology for groups may also be applied to human collaboration. A group may establish what constitutes clearly defined, group-subscribed, anti-social behavior.  A group member violating such a standard could be removed from group participation.  For example, a user could make a setting so that other users with a TrustScore lower than “10” (an arbitrary score) could not participate in group discussions with that user.  If a user lies to the group, bullies others, or breaks established rules of decorum, that user’s score could be lowered to a point where the user could be excluded from the group (effectively, trolls, bullies, liars, etc. remove themselves from group participation).  New users might need to earn trust or be vouched for by a trusted member in order to join.
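The threshold setting described above amounts to a simple filter. The sketch below assumes a per-user minimum score of 10, matching the arbitrary example in the text; the data shapes are illustrative.

```python
# Minimal sketch of TrustScore-gated group participation: a member only
# sees posts from authors whose score meets the member's configured floor.
# The threshold of 10 mirrors the arbitrary example above.

def visible_posts(posts, trust_scores, min_score=10):
    """Filter group posts by the author's TrustScore (unknown authors score 0)."""
    return [p for p in posts if trust_scores.get(p["author"], 0) >= min_score]
```

Under this scheme, a user whose score is driven below the floor simply stops appearing in the filtered feed, which is how trolls effectively remove themselves from participation.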

The TrustCentral team believes that social and business groups (as well as the broader society) will be better off if groups can benefit from a technology that supports agreed upon standards of civility.  The foundation of accomplishing such an end exists with the technology that Dr. Kravitz designed for TrustCentral.

TRUSTSCORE

From the inception of the design evolution of the TrustCentral system, the concept of “scoring” has maintained a central position.  Scoring can be a valuable tool for users to portray themselves, their activities, the handling of sensitive data as well as other metrics. Mechanisms have been designed by TrustCentral for a variety of scoring metrics (e.g., “TrustScore”, “Reputation Score” and others).

A TrustScore is a measure of a combination of one or more factors, such as: length of time on the system; frequency of use; size of social network; level of verification/endorsement of the identity by other users and/or entities as well as by any trusted third party; veracity of information and data shared within the system; and others.  A TrustScore is computed by an algorithm with various rules, generally created and managed by the TrustCentral system, either privately or publicly.  A use of the TrustScore technology may include recording the resulting individual user and/or entity TrustScores on a blockchain (or in another public forum) for other users or entities to view and use as they see fit.  TrustScores might also be kept private and only disclosed to parties as specified by the user or entity with which the TrustScore is associated.  Or they may be managed in another fashion as deemed appropriate.
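One simple way to combine factors like those listed above is a weighted sum. The factor names, weights, and 0–100 scale below are assumptions for illustration only, not TrustCentral’s actual scoring rules.

```python
# Hedged sketch of a TrustScore as a weighted combination of the factors
# listed above; weights and the 0-100 cap are illustrative assumptions.

WEIGHTS = {
    "tenure_months": 0.5,   # length of time on the system
    "monthly_uses": 0.3,    # frequency of use
    "network_size": 0.1,    # size of social network
    "endorsements": 2.0,    # verification/endorsement by users or third parties
}

def trust_score(factors: dict) -> float:
    """Combine whatever factors are present; missing factors count as zero."""
    raw = sum(WEIGHTS[k] * factors.get(k, 0) for k in WEIGHTS)
    return min(100.0, raw)  # cap at an illustrative ceiling of 100
```

A production rule set would likely also penalize negative signals (e.g., verified false statements), which a weighted sum accommodates by allowing negative weights.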

The TrustScoring mechanism enables users to portray their current level of trustworthiness even where they opt to present a (sanitized, non-full-disclosure) identity profile that does not detail the basis upon which the TrustCentral system has gauged its assessment of trust. Identity profiles can be set up so as to be appropriate for the specific context of each pairwise agreement the user has with other users or entities. Each identity profile can evolve as relationships change and as new relationships are formed.

REPUTATION SCORE AND BLOCKCHAIN

While a TrustScore primarily measures participation, recognition of identity by others, etc., a Reputation Score may reflect one user’s evaluation of how another user executed a task that the latter was expected to perform.

This technology can be valuable within a blockchain ecosystem, particularly as it relates to associating the reputation of a unique blockchain wallet with that of an authenticated individual (or business) identity.

The TrustCentral system can leverage the immutability, transparency and availability of blockchain transactions in order to gauge, update and apply reputation scoring of individual devices and of humans utilizing devices.  For example, upon completion of specified tasks, users may rate one another’s performance on a blockchain.  Performance metrics of established communication lines may affect the reputation of participating users/devices.  User reputation and device reputation are typically encrypted and selectively releasable (publicly, or confidentially to Validators and/or intended transaction recipients).  Reputation thresholds, as a condition of the suitability of transactions, may be used to determine whether or how candidate transactions are processed, and may be set by use-case-specific policy enforceable by Validators of transactions submitted to the blockchain.  Invitees may check the current reputation of inviters as a condition of acceptance.  The existence of dedicated communication lines may be a prerequisite to entrusting others with properly handling sensitive data, and/or believing data.
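A Validator’s reputation-threshold check described above can be sketched as a small policy gate. The use-case names, thresholds, and default floor here are hypothetical.

```python
# Sketch of a Validator enforcing a use-case-specific reputation policy
# before processing a candidate transaction; all values are illustrative.

POLICY = {"payment": 70, "message": 30}  # assumed per-use-case floors

def validate_transaction(tx: dict, reputations: dict) -> bool:
    """Accept the transaction only if the sender's reputation meets policy."""
    threshold = POLICY.get(tx["use_case"], 50)  # assumed default floor
    sender_rep = reputations.get(tx["sender"], 0)
    return sender_rep >= threshold
```

Because the thresholds live in policy rather than code, each blockchain ecosystem can tune them to its own use cases, as the text suggests.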

We look forward to releasing this service in the future.