Human Collaboration


TrustCentral’s technology can support the security and privacy requirements of a broad range of human collaboration scenarios.

Group collaboration through a “Community of Interest”

These can be social, professional or business groupings with one or more shared interests.  Examples might include the examination and alignment of: historical records; legal case precedents; published journals; social activities; medical treatments; etc.

Building an accurate, trusted, useful, thorough set of historical records on a subject

TrustCentral technology has particular applicability for groups (such as “Communities of Interest”) that are motivated to authenticate and secure digital records on a particular subject in order to establish an accurate historical record of that subject.  The technology is valuable during the process of aligning known data: it provides a secured capability for authenticated users in the community (whether anonymous or not) to contribute, thereby supporting the group’s discovery of correct data that is otherwise unknown, obfuscated or hidden.

TrustCentral platform capabilities

These include: authentication of user devices; authenticated, persistent Secure Communication Lines; encryption (including key management); digital signing; cryptographically secured group identity and membership; TrustScoring; authorization of rights and actions; data immutability; and others.

Protection from bad actors

The platform will protect users and data from sabotage; trolls; false data in general; malicious insertion of false data into good data; third parties that attempt to disrupt a group or its operations; etc.

Civility within groups 

Groups can be large (e.g., Facebook) or small (e.g., a bulletin board or a club).  Sometimes the civility of conversation within a group drops to a level at which productive conversation is stifled.  That does not support a civil society.  While robust dialogue should be part of any community, dialogue can degrade to the level of trolls, bullies, liars, etc.  Creative and/or effective people typically don’t want to be distracted by (or be forced to engage with) those kinds of unproductive interactions.  While personal anonymity is valuable and is an optional, intrinsic component of the TrustCentral design, the digital (anonymous) identity of an endpoint cannot be spoofed.


Both user trustworthiness and reputation (whether associated with a known identity or an anonymous unique identity) can be scored for the benefit of all users.  Acceptable and unacceptable behavior on the part of a user will cause that user’s TrustScore and/or Reputation Score to be adjusted.  The resulting scores serve as a guide for other users when choosing whom to interact with.

Secure and private sharing of information

Security and privacy are supported between authenticated individuals, group members, management collaborators, etc. and within established collaborative groups (and subgroups) in a variety of business, governmental, academic and social contexts.  Through novel applications of cryptography, this technology controls content access at the document/digital-file level, which makes these capabilities available regardless of the user’s preferred method of document/file transport or exchange.  Ordinary email can be used for transport, and documents may also be exchanged via cloud storage.  For verification, the system supports digital audit trails.

Protection from AI (Artificial Intelligence) BOTS as a Future Threat to Civil Discourse

A 2019 blog post by OpenAI (the project co-founded by Elon Musk) is concerning, as it could portend a future in which AI BOTS are able to impersonate real people in a manner that is virtually undetectable.  The post, entitled “Better Language Models and Their Implications,” includes this description:

“We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.”

The post goes into the technical nature and capabilities of this technology (which is not the subject of this page).  Many uses of this technology are potentially beneficial and productive for individuals, groups, businesses, society, etc.; that is not what is being examined here.  What is of concern to us at TrustCentral (and what our technology can help mitigate) are the potential malicious purposes to which such technology might be put.  The OpenAI post considers this:

We can also imagine the application of these models for malicious purposes, including the following (or other applications we can’t yet anticipate):

  • Generate misleading news articles
  • Impersonate others online
  • Automate the production of abusive or faked content to post on social media
  • Automate the production of spam/phishing content

These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns. The public at large will need to become more skeptical of text they find online, just as the “deep fakes” phenomenon calls for more skepticism about images.

Today, malicious actors — some of which are political in nature — have already begun to target the shared online commons, using things like “robotic tools, fake accounts and dedicated teams to troll individuals with hateful commentary or smears that make them afraid to speak, or difficult to be heard or believed”.

With its combination of authentication, authorization, TrustScores of identities, Reputation metrics, and related technologies, TrustCentral offers a system well suited not only to protect the integrity of dialogue, civility, trust and confidence between individuals, within groups and businesses, and across society, but also to detect and eliminate BOTS that may attempt to disrupt such activities.

TrustCentral looks forward to joining other like-minded stewards of a civil society in working to immunize our society and our mutual communications from trolls, bullies, liars, etc., whether human or AI BOTS.



Key elements of the TrustCentral system include:

  • A secure, persistent digital presence is authenticated for users of devices that support a device root of trust.  Examples of such devices include iPhones, computing devices with widely available TPM (Trusted Platform Module) chips, and others onto which a TrustCentral app is installed
  • The fundamental architecture of the TrustCentral system is based on secure, authenticated communication lines established between users.  These communication lines are built through the application of an Inviter-Invitee Protocol (supporting mutual authentication between remote endpoints), through which authenticated, persistent Secure Communication Lines between user endpoints are established


The TrustCentral system’s patented Inviter-Invitee Protocol suite provides tools for an Inviter (e.g., a user or trusted partner) to vouch for the identity of an Invitee (e.g., another user) who successfully authenticates and completes the protocol, thereby allowing for the establishment of a secure communication line between the two endpoints.
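The patented protocol itself is not published here, so the following is only a minimal sketch of the general invite-then-mutually-authenticate pattern it describes.  An HMAC challenge-response over a one-time invitation secret stands in for the protocol’s real public-key operations, and all function names are illustrative assumptions, not TrustCentral APIs.

```python
import hashlib
import hmac
import secrets

def new_invitation() -> bytes:
    """Inviter generates a one-time invitation secret, delivered out of band."""
    return secrets.token_bytes(32)

def respond(invite_secret: bytes, challenge: bytes) -> bytes:
    """A party proves possession of the invitation secret for a given challenge."""
    return hmac.new(invite_secret, challenge, hashlib.sha256).digest()

def verify(invite_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """The challenging party checks the response in constant time."""
    expected = hmac.new(invite_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Mutual authentication: each endpoint challenges the other before the
# secure communication line is considered established.
secret = new_invitation()                      # shared out of band by the inviter
c_inviter = secrets.token_bytes(16)            # inviter's challenge to the invitee
c_invitee = secrets.token_bytes(16)            # invitee's challenge to the inviter
assert verify(secret, c_inviter, respond(secret, c_inviter))
assert verify(secret, c_invitee, respond(secret, c_invitee))
```

In the real system, a successful completion of the protocol lets the inviter vouch for the invitee, after which a certificate is bound to the resulting communication line.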


Communication Lines are characterized by endpoints with context-specific identities that are typically governed by an end-to-end digital agreement.  They are auditable, brokered, trusted relationships, where each such relationship/digital agreement can stand alone for privacy purposes or can leverage the build-up of identity confidence levels across relationships.  The TrustCentral system includes an attribute authority (AA), which acts as a trusted third party supporting users in establishing each communication line by: (a) establishing the identities of users; (b) uniquely associating keys with identities and their invitees; and (c) uniquely associating a certificate and digital agreement with each communication line.
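The relationship the attribute authority records per line might be modeled as a simple data structure.  This is a hypothetical sketch of points (a)–(c) above, not TrustCentral’s actual schema: the field names and the dictionary standing in for the AA’s registry are assumptions.

```python
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CommunicationLine:
    """One brokered, auditable relationship between two endpoints."""
    inviter_id: str        # context-specific identity of the inviter
    invitee_id: str        # context-specific identity of the invitee
    certificate_id: str    # certificate the AA uniquely binds to this line
    agreement_hash: str    # hash of the end-to-end digital agreement
    line_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def establish_line(aa_registry: dict, inviter_id: str, invitee_id: str,
                   certificate_id: str, agreement_hash: str) -> CommunicationLine:
    """The attribute authority records a new line: one certificate and
    one digital agreement per communication line."""
    line = CommunicationLine(inviter_id, invitee_id, certificate_id, agreement_hash)
    aa_registry[line.line_id] = line
    return line
```

Because each record stands alone, a line can stay private to its two endpoints, while an AA that holds many records could still aggregate identity-confidence levels across relationships.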


Typically, individuals have different personas they use in their different relationships: professional; social; parental; an avatar; etc.  One of the features that makes the TrustCentral system unique is that the primary enabling security is based upon communication lines, not endpoints.  This is why each user can have multiple identities: they can be anyone at the end of the communication line; what matters is that the entity at each endpoint authenticates the other’s claimed identity and agrees to communicate.  Thus a given user may have a plurality of identity profiles.  Further, one identity may be established as the holder of a particular digital wallet.


The system supports the encryption and decryption of documents.  It protects entry into the system’s software and, when desired, adds a further layer of security for access to specified encrypted documents.  From there, the system extends the ability to validate identities (through various proprietary and common methods) and to score identities (through proprietary methods).  The system is agnostic to the platform and/or service each user employs to transport or store their encrypted content.


Optionally, it may become valuable for a user to validate and establish a claimed (and verifiable) actual identity.  To do this, a user may utilize a notarization service.  After first installing the TrustCentral client app software, the user establishes a Secure Communication Line with a Trusted Partner Notarization Service within the system.  Upon installation, only an “unknown” identity (which is nonetheless unique and unspoofable) would be established through the initial application of the Inviter-Invitee Protocol.  Once the Secure Communication Line is established, the user (which may be an individual, business or other entity) may present physical or digital documents (such as a driver’s license, Social Security card, birth certificate, passport or other) to the Trusted Partner.  It should be noted that the standard encryption and security capabilities available to users of the TrustCentral system allow the user to transmit any such documents to the Trusted Partner in encrypted form, such that only the Trusted Partner is able to access them.  The Trusted Partner reviews these documents and may require the user to answer challenge questions or provide biometric identification so that the identity can be authentically established (a more precise process than the authentication steps commonly used by credit bureaus for online identity authentication).

Once an actual identity is validated and associated with that user, the Trusted Partner Notarization Service may digitally certify its authentication of that identity (and its specific association with that unique installation of the TrustCentral client app software) by generating a signed digital identity token (DIT), which it then provides to the user’s client app.  That DIT will include the user’s public key.  Such a certification may be recorded on a blockchain and/or on another record, and/or simply be a digital token that is available to the user for its further use when asserting its authenticated identity.
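The DIT pattern described above — a payload binding the user’s public key to verified claims, signed by the notarization service — can be sketched as follows.  This is an illustrative assumption, not the actual token format; an HMAC keyed by the service stands in for the service’s real digital signature.

```python
import hashlib
import hmac
import json

def issue_dit(service_key: bytes, user_public_key: str, claims: dict) -> dict:
    """Notarization service issues a digital identity token (DIT) binding
    the user's public key to its verified identity claims.
    (HMAC here stands in for the service's real digital signature.)"""
    payload = json.dumps(
        {"public_key": user_public_key, "claims": claims}, sort_keys=True
    )
    signature = hmac.new(service_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_dit(service_key: bytes, dit: dict) -> bool:
    """Anyone holding the service's verification key can check the token."""
    expected = hmac.new(service_key, dit["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, dit["signature"])
```

With a real signature scheme, the service’s public verification key (or a blockchain record of the token) would let any relying party check the certification without contacting the service.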


TrustCentral’s technology for groups may also be applied to human collaboration.  A group may establish what constitutes clearly defined, group-subscribed anti-social behavior, and a group member violating such a standard could be removed from group participation.  For example, a user could make a setting that other users with a TrustScore lower than “10” (an arbitrary score) could not participate in group discussions with that user.  If a user lies to the group, bullies others, or breaks established rules of decorum, that user’s score could be lowered to the point where the user is excluded from the group (effectively, trolls, bullies, liars, etc. remove themselves from group participation).  New users might need to earn trust or be vouched for by a trusted member in order to join.
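The threshold setting described above can be sketched in a few lines.  The function names, the plain dictionary of scores, and the penalty values are all hypothetical; only the gating idea (score below threshold means no participation) comes from the text.

```python
def may_participate(trust_scores: dict, user_id: str, threshold: int = 10) -> bool:
    """Gate group participation on a minimum TrustScore.
    The default threshold of 10 mirrors the arbitrary example in the text."""
    return trust_scores.get(user_id, 0) >= threshold

def apply_penalty(trust_scores: dict, user_id: str, penalty: int) -> None:
    """Rule violations (lying, bullying, breaking decorum) lower the score."""
    trust_scores[user_id] = trust_scores.get(user_id, 0) - penalty

scores = {"alice": 12}
assert may_participate(scores, "alice")     # score 12 clears the threshold
apply_penalty(scores, "alice", 5)           # a violation drops the score to 7
assert not may_participate(scores, "alice") # now excluded from the group
```

In this way a misbehaving user’s exclusion follows mechanically from their own conduct rather than from moderator judgment calls.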

The TrustCentral team believes that social and business groups (as well as the broader society) will be better off if groups can benefit from a technology that supports agreed upon standards of civility.  The foundation of accomplishing such an end exists with the technology that Dr. Kravitz designed for TrustCentral.


From the inception of the design evolution of the TrustCentral system, the concept of “scoring” has maintained a central position.  Scoring can be a valuable tool for users to portray themselves, their activities, the handling of sensitive data as well as other metrics. Mechanisms have been designed by TrustCentral for a variety of scoring metrics (e.g., “TrustScore”, “Reputation Score” and others).

A TrustScore is a measure combining one or more factors such as: length of time on the system; frequency of use; size of social network; level of verification/endorsement of the identity by other users and/or entities, as well as any trusted third party; veracity of information and data shared within the system; and others.  A TrustScore is computed by an algorithm with various rules, generally created and managed by the TrustCentral system, either privately or publicly.  One use of the TrustScore technology may be to record the resulting individual user and/or entity TrustScores on a blockchain (or in another public forum) for other users or entities to view and use as they see fit.  TrustScores might also be kept private and disclosed only to parties specified by the user or entity with which the TrustScore is associated, or they may be managed in another fashion as deemed appropriate.

The TrustScoring mechanism enables users to portray their current level of trustworthiness even where they opt to present a sanitized (non-full-disclosure) identity profile that does not detail the basis upon which the TrustCentral system has gauged its assessment of trust.  Identity profiles can be set up so as to be appropriate for the specific context of each pairwise agreement the user has with other users or entities.  Each identity profile can evolve as relationships change and as new relationships are formed.


While a TrustScore primarily measures participation, recognition of identity by others, etc., a Reputation Score may reflect one user’s evaluation of how another user executed a task they were expected to perform.

This technology can be valuable within a blockchain ecosystem, particularly as it relates to associating the reputation of a unique blockchain wallet with that of an authenticated individual (or business) identity.

The TrustCentral system can leverage the immutability, transparency and availability of blockchain transactions in order to gauge, update and apply reputation scoring of individual devices and of the humans utilizing them.  For example, upon completion of specified tasks, users may rate one another’s performance on a blockchain.  Performance metrics of established communication lines may affect the reputation of participating users/devices.  User reputation and device reputation are typically encrypted and selectively releasable (publicly, or confidentially to Validators and/or intended transaction recipients).  Reputation thresholds as a condition of transaction suitability may be used to determine if or how candidate transactions are processed, and may be set by use-case-specific policy, as enforced by Validators of transactions submitted to the blockchain.  Invitees may check the current reputation of inviters as a condition of acceptance.  The existence of dedicated communication lines may be a prerequisite to entrusting others with properly handling sensitive data, and/or to believing data.


As this solution is built on a foundation of public key cryptography, it offers a fundamental capability: a document may be encrypted by one user for a second, specific user when that pair of users shares an authenticated communication line.  Only that specific recipient will be able to decrypt and access such an encrypted document.  This same principle may be extended to the secure sharing of documents within a group: a document may be encrypted such that only authenticated group members are able to decrypt it.
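The standard pattern behind such group sharing is envelope encryption: encrypt the document once under a fresh content key, then wrap that key separately for each authorized member.  The sketch below illustrates only that pattern — the toy XOR keystream stands in for a real AEAD cipher, and wrapping with a shared member key stands in for encryption to each member’s public key; none of this is TrustCentral’s actual cipher suite.

```python
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream (SHA-256 in counter mode) standing in for a real
    authenticated cipher.  XOR is its own inverse, so this both encrypts
    and decrypts.  Not for production use."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def encrypt_for_group(document: bytes, member_keys: dict) -> dict:
    """Encrypt once under a fresh content key, then wrap that key per member
    (wrapping stands in for encryption to each member's public key)."""
    content_key = secrets.token_bytes(32)
    return {
        "ciphertext": _keystream_xor(content_key, document),
        "wrapped_keys": {
            member: _keystream_xor(key, content_key)
            for member, key in member_keys.items()
        },
    }

def decrypt_as_member(envelope: dict, member: str, member_key: bytes) -> bytes:
    """Only a member whose key unwraps the content key can read the document."""
    content_key = _keystream_xor(member_key, envelope["wrapped_keys"][member])
    return _keystream_xor(content_key, envelope["ciphertext"])
```

The pairwise case on an authenticated communication line is simply the degenerate envelope with a single wrapped key for the one intended recipient.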