TwitterCard meta tags are supposed to use the attributes "name" and "content".
OpenGraph tags use the attributes "property" and "content".
Twitter itself is smart enough to detect broken meta tags and discover the TwitterCard
using "property" and "content", but other platforms that only implement parsing of TwitterCards
and not OpenGraph may fail to correctly detect the tags as they're under the wrong attributes.
> "Open Graph protocol also specifies the use of property and content attributes for markup while
> Twitter cards use name and content. Twitter’s parser will fall back to using property and content,
> so there is no need to modify existing Open Graph protocol markup if it already exists." [0]
[0] https://developer.twitter.com/en/docs/twitter-for-websites/cards/guides/getting-started
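For illustration, correct markup under both specs looks roughly like this (titles and URLs are placeholders):

```html
<!-- Twitter Cards use name/content -->
<meta name="twitter:card" content="summary" />
<meta name="twitter:title" content="Example title" />

<!-- Open Graph uses property/content -->
<meta property="og:title" content="Example title" />
<meta property="og:image" content="https://example.com/image.png" />
```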
Until now it was returning a 500 because the upload plug was going
through the changeset and ending up in the JSON encoder, which raised
because the struct has to @derive the encoder.
Some software, like GoToSocial, expose replies as ActivityPub
Collections, but do not expose any item array directly in the object,
causing validation to fail via the ObjectID validator. Now, Pleroma will
drop that field in this situation too.
When someone isn't a superuser any more, they shouldn't see the reports any more either.
Here we delete the report notifications from a user when that user gets updated from being a superuser to a non-superuser.
This will prevent a user with a large number of posts from negatively affecting performance of the outgoing federation queue if they delete their account.
The header name was Report-To, not Reply-To.
In any case, that's now being changed to the Reporting-Endpoints HTTP
Response Header.
https://w3c.github.io/reporting/#header
https://github.com/w3c/reporting/issues/177
CanIUse says the Report-To header is still supported by current Chrome
and friends.
https://caniuse.com/mdn-http_headers_report-to
It doesn't have any data for the Reporting-Endpoints HTTP header, but
this article says Chrome 96 supports it.
https://web.dev/reporting-api/
(Even though that came out one year ago, it is not compatible with
Network Error Logging, which is still using the Report-To version of the
API.)
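For reference, the two header syntaxes differ roughly like this (the endpoint name and URL are placeholders):

```
Report-To: {"group":"csp-endpoint","max_age":10886400,"endpoints":[{"url":"https://example.com/reports"}]}

Reporting-Endpoints: csp-endpoint="https://example.com/reports"
```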
Signed-off-by: Thomas Citharel <tcit@tcit.fr>
Non-Create/Listen activities had their associated object field
normalized and fetched, but only to use their `id` field, which is both
slow and redundant. This also failed on Undo activities, which delete
the associated object/activity in the database.
Undo activities will now render properly and database loads should
improve ever so slightly.
This fixes a race condition bug where keys could be regenerated
post-federation, causing activities and HTTP signatures from a user to
be dropped due to key differences.
As this plug is called on every request, this should reduce load on the
database by not requiring a select on the users table every single
time, instead using the by-ID user cache whenever possible.
There are two reasons for adding a GET endpoint:
0: Merely displaying the form does not change anything on the server.
1: It makes frontend development easier as they can now use a link,
instead of a form, to allow remote users to interact with local ones.
The (request-target) used by Pleroma is non-standard, but many HTTP
signature implementations do it this way due to a misinterpretation of
draft 06 of HTTP signatures: "path" was interpreted as not having
the query, though later examples show that it must be the absolute path
with the query part of the URL as well.
This behavior is kept to make sure most software (Pleroma itself,
Mastodon, and probably others) does not break, but Pleroma now accepts
signatures for a (request-target) containing the query, as expected by
many HTTP signature libraries and as clarified in draft 11 of HTTP
signatures.
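For illustration (method and path are arbitrary), the two forms of the pseudo-header look like this:

```
(request-target): post /users/alice/inbox            # query stripped (historical behaviour, still accepted)
(request-target): post /users/alice/inbox?page=true  # query included (per draft 11, now also accepted)
```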
Additionally, the new draft renamed (request-target) to @request-target.
We now support both for incoming requests' signatures.
`context` fields for objects and activities can now be generated based
on the object/activity `inReplyTo` field or its ActivityPub ID, as a
fallback method in cases where `context` fields are missing for incoming
activities and objects.
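A minimal sketch of one way to read that fallback (not Pleroma's exact implementation):

```elixir
# Keep an existing context; otherwise fall back to the inReplyTo target,
# and finally to the object's own ActivityPub id.
context = data["context"] || data["inReplyTo"] || data["id"]
```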
Incoming Pleroma replies to a Misskey thread were rejected due to a
broken context fix, which caused them to not be visible until a
non-Pleroma user interacted with the replies.
This fix also properly sets the fixed-up object's context on its parent
Create activity, if it was changed.
This field replaces the now deprecated conversation_id field, and now
exposes the ActivityPub object `context` directly via the MastoAPI
instead of relying on StatusNet-era data concepts.
This field seems to be a left-over from the StatusNet era.
If your application uses `pleroma.conversation_id`: this field is
deprecated.
It is currently stubbed instead by doing a CRC32 of the context, and
clearing the MSB to avoid overflow exceptions with signed integers on
the different clients using this field (Java/Kotlin code, mostly; see
Husky and probably other mobile clients).
This should be removed in a future version of Pleroma. Pleroma-FE
currently depends on this field, as well.
30 to 70% of the objects in the object table are simple JSON objects
containing a single field, 'id', holding the context's ID. Creating an
object per context seems to be an old relic from the StatusNet era, and
it is nowadays only used as a helper for threads in Pleroma-FE via the
`pleroma.conversation_id` field in status views.
An object per context was created, and its numerical ID (table column)
was used and stored as 'context_id' in the object and activity along
with the full 'context' URI/string.
This commit removes this field and stops creation of objects for each
context, which will also allow incoming activities to use activity IDs
as contexts, something which was not possible before, or would have been
very broken under most circumstances.
The `pleroma.conversation_id` field has been reimplemented in a way to
maintain backwards-compatibility by calculating a CRC32 of the full
context URI/string in the object, instead of relying on the row ID for
the created context object.
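A rough sketch of the backwards-compatible calculation (module and function names are made up for illustration):

```elixir
defmodule ConversationIdStub do
  import Bitwise

  # CRC32 of the full `context` URI, with the most significant bit cleared so
  # the value always fits a signed 32-bit integer on clients (Java/Kotlin).
  def from_context(context) when is_binary(context) do
    :erlang.crc32(context) |> band(0x7FFF_FFFF)
  end
end
```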
This implements fully_qualify_emoji/1, which will return the
fully-qualified version of an emoji if it knows of one, or return the
emoji unmodified if not.
This code generates combinations per emoji: for each FE0F, all possible
combinations of the character being removed or kept are generated. This
is an attempt to find all partially-qualified and unqualified versions
of a fully-qualified emoji.
I have found *no cases* for which this would be a problem, after
browsing the entire emoji list in emoji-test.txt. This is safe, and,
sadly, most likely the sanest too.
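A sketch of that combination generation (module and function names are illustrative, not the actual Pleroma code):

```elixir
defmodule EmojiVariations do
  @fe0f "\uFE0F"

  # Derive every variant of a fully-qualified emoji obtained by keeping or
  # dropping each U+FE0F (emoji variation selector), so partially-qualified
  # and unqualified forms can be mapped back to the qualified one.
  def variations(emoji) do
    emoji
    |> String.split(@fe0f)
    |> combine()
  end

  defp combine([part]), do: [part]

  defp combine([part | rest]) do
    for tail <- combine(rest), sep <- [@fe0f, ""], do: part <> sep <> tail
  end
end

# EmojiVariations.variations("❤️") #=> ["❤️", "❤"]
```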
Tries fully-qualifying emoji when receiving them, by adding the emoji
variation sequence to the received reaction emoji.
This issue arises when other instance software, such as Misskey, tries
reacting with emoji that have unqualified or minimally qualified
variants, like a red heart. Pleroma only accepts fully-qualified emoji
in emoji reactions, and used to refuse those emoji. Now, Pleroma will
attempt to properly qualify them first, and reject them only if checks
still fail.
This commit contains changes to tests proposed by lanodan.
Co-authored-by: Haelwenn <contact+git.pleroma.social@hacktivis.me>
This was done by floatingghost as part of a bigger commit in Akkoma.
See <37ae047e16/lib/pleroma/application.ex (L83)>.
As explained in <https://ihatebeinga.live/objects/860d23e1-dc64-4b07-8b4d-020b9c56cff6>
> there are so many caches that clearing them all can nuke the supervisor, which by default will become an hero if it gets more than 3 restarts in <5 seconds
And further down the thread
> essentially we've got like 11 caches (37ae047e16/lib/pleroma/application.ex (L165))
> then in test we fetch them all (https://akkoma.dev/AkkomaGang/akkoma/src/branch/develop/test/support/data_case.ex#L50) and call clear on them
> so if this clear fails on any 3 of them, the pleroma supervisor itself will die
How does it fail?
> idk maybe cachex dies, maybe :ets does a weird thing
> it doesn't log anything, it just consistently dies during cache clearing so i figured it had to be that
> honestly my best bet is locksmith and queuing
> https://github.com/whitfin/cachex/blob/master/lib/cachex/actions/clear.ex#L26
> clear is thrown into a locksmith transaction
> locksmith says
> >If the process is already in a transactional context, the provided function will be executed immediately. Otherwise the required keys will be locked until the provided function has finished executing.
> so if we get 2 clears too close together, maybe it locks, then doesn't like the next clear?
It is possible for an earlier Update to be received by us later.
For this, we now
(1) only allow Updates to poll counts if there is no updated field,
or the updated field is the same as the last updated date or
creation date;
(2) do not allow updating anything if the updated field
is older than the last updated date or creation date;
(3) allow updating updatable fields otherwise (normal updates);
(4) do not create a new history item on its own if only the updated
field is changed.
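An illustrative sketch of these rules (module name, argument shapes, and return atoms are hypothetical; rule 4 is applied separately when the update is actually stored):

```elixir
defmodule UpdateAcceptance do
  # `incoming` is the Update's `updated` timestamp (a DateTime or nil);
  # `last` is the stored `updated` timestamp, falling back to the creation date.
  def classify(nil, _last), do: :poll_counts_only          # rule 1
  def classify(incoming, last) do
    case DateTime.compare(incoming, last) do
      :lt -> :reject                                        # rule 2: stale update
      :eq -> :poll_counts_only                              # rule 1
      :gt -> :full_update                                   # rule 3: normal update
    end
  end
end
```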
In the Create validator we do not validate the object data,
but that is because the object itself will go through the
pipeline again, which is not the case for Update. Thus,
we added validation for objects in Update activities.
I renamed some tags before, but forgot to rename the pipelines.
I also had some tags which I forgot to add to the config, description, etc.
These have now been renamed and added.
I used keyword_list[:key], but if the key doesn't exist, it will return nil. I actually expect a list, and the code further down uses that list.
I believe the key should always be present, but in case it's not, it's better to return an empty list instead of nil. That way the code won't fail further down the line.
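In other words, the lookup now defaults to an empty list, roughly:

```elixir
# `keyword_list[:key]` returns nil when the key is missing;
# Keyword.get/3 with a default keeps the rest of the code working on a list.
Keyword.get(keyword_list, :key, [])
```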
During attachment upload Pleroma returns a "description" field. Pleroma-fe has an MR to use that to pre-fill the image description field, <https://git.pleroma.social/pleroma/pleroma-fe/-/merge_requests/1399>
* This MR allows Pleroma to read the EXIF data during upload and return the description to the FE
* If a description is already present (e.g. because a previous module added it), it will use that
* Otherwise it will read from the EXIF data. First it will check -ImageDescription; if that's empty, it will check -iptc:Caption-Abstract
* If no description is found, it will simply return nil, just like before
* When people set up a new instance, they will be asked if they want to read metadata and this module will be activated if so
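A rough sketch of that fallback order, assuming the metadata is read by shelling out to exiftool (module, function name, and invocation are hypothetical; error handling is omitted):

```elixir
defmodule UploadDescriptionSketch do
  # Try -ImageDescription first, then -iptc:Caption-Abstract, returning nil
  # when neither tag yields a non-empty value.
  def exif_description(file) do
    Enum.find_value(["-ImageDescription", "-iptc:Caption-Abstract"], fn tag ->
      case System.cmd("exiftool", ["-b", tag, file]) do
        {out, 0} ->
          case String.trim(out) do
            "" -> nil
            description -> description
          end

        _ ->
          nil
      end
    end)
  end
end
```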
This was taken from an MR I did on Pleroma and isn't finished yet.
I first focused on getting things working.
Now that they do and we know what tags there are, I put some thought into providing better names.
I use the form <what_it_controls>_<what_it_allows_you_to_do>
:statuses_read => :messages_read
:status_delete => :messages_delete
:user_read => :users_read
:user_deletion => :users_delete
:user_activation => :users_manage_activation_state
:user_invite => :users_manage_invites
:user_tag => :users_manage_tags
:user_credentials => :users_manage_credentials
:report_handle => :reports_manage_reports
:emoji_management => :emoji_manage_emoji
Deactivated users are only visible to users privileged with :user_activation since fc317f3b17.
Here we also make sure deactivated users are rendered with the deactivated status for users who are allowed to see them.
Instead of `Pleroma.User.all_superusers()` we now use `Pleroma.User.all_superusers(:report_handle)`
I also changed it for sending emails, but there were no tests.