Requirements Page Sandboxing
- Management of existing blocklists
- Granular control over blocking instances and users (doesn't Mastodon already have this?)
- Configurable restrictions for registration/invitation (what restrictions?)
- Robust moderation tools (way too vague; this should be a list of specific tools)
- One-click user banning from the timeline interface (seems way too easy to accidentally ban users with this; I accidentally boost things about once a day)
- Prevent IP addresses from making new accounts (Mastodon already has this?)
- Better prevention against spambot sign-ups (this should be moved to immediate high priority)
- Chain-banning, i.e. banning someone and all of their followers (no)
- Notes when blocking someone (e.g. reason for blocking)
- Control of where blocked/muted content is shown (allowing "nowhere")
- Time-based blocking and muting
- Display of mutual followers
- Conformance to post privacy (what does this mean?)
- Control of who can reply to a post; "don't @ me" mode
- Configurable blocking of federation of certain posts (let's call this "local-server-only posts")
- Some way to address block evasion, i.e. the blocked person making an account on a new instance (the only way I know to do this is to record all IPs used by a blocked account and then auto-block any new account using one of those IPs; obviously that would also catch a lot of unrelated people on shared IPs)
- User blocking is two-way
- Screen-reader accessibility, including persistent suggestion of image descriptions (the first half of this should be a high-priority issue; the second half needs further discussion)
- Motion sensitivity control (ensuring that flashing/fast movement doesn't happen by default, and allowing users to configure this)
- Differentiation between subjects (currently Mastodon's "CW" feature) and content warnings in general, encouragement of users to add subjects (I still don't see how this would work)
- Keyword-based hide and block (Mastodon has a basic version of this)
- Ability to migrate existing Mastodon instance to fork code, up to a certain Mastodon version (Key Issue)
- Ability to serve media via S3 and similar services
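The IP-overlap idea for catching block evasion (recording a blocked account's IPs and flagging new accounts that reuse them) can be sketched minimally. This is an illustrative sketch only; `ips_overlap` and the sample addresses are hypothetical, not existing Mastodon or fork code:

```python
def ips_overlap(blocked_ips, new_account_ips):
    """Return True if a new account shares any IP address with a blocked account."""
    return bool(set(blocked_ips) & set(new_account_ips))

# IPs previously seen on a blocked account (example addresses):
blocked = {"203.0.113.7", "198.51.100.22"}

# A brand-new account signs up from one of them:
print(ips_overlap(blocked, {"203.0.113.7"}))  # True
print(ips_overlap(blocked, {"192.0.2.55"}))   # False
```

Because shared IPs (CGNAT, universities, VPNs) would trigger constant false positives, a match like this should probably only flag the account for moderator review, never auto-ban.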
High immediate priority
- Visually appealing interface (a fullscale interface redesign is not high priority)
- Local-instance-only privacy option
- Mutuals-only privacy option
- Configuration of federation (blacklists like Mastodon versus whitelists like awoo.space)
- Follow request notifications (does Mastodon not have those? How do follow requests work, then?)
- Performant web UI
- Low server resource usage
- Ease of deployment
- Translation of toots inline (not worth it)
- Conformance to ActivityPub standard (why is this "long-term planned"? We should never be breaking the ActivityPub standard; we're explicitly an ActivityPub project)
- Creation of a spec for features not in ActivityPub, to ensure fediverse health (this is not a job for the code team)
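The federation-configuration item above (blocklists like Mastodon versus whitelists like awoo.space) reduces to one policy check. A minimal sketch, assuming hypothetical names (`federation_allowed`, the mode strings, and the example domains are all illustrative):

```python
def federation_allowed(instance, mode, listed):
    """Decide whether to federate with `instance`.

    mode == "blocklist": federate with everyone except the listed instances
                         (Mastodon-style default-open).
    mode == "allowlist": federate only with the listed instances
                         (awoo.space-style default-closed).
    """
    if mode == "blocklist":
        return instance not in listed
    if mode == "allowlist":
        return instance in listed
    raise ValueError(f"unknown federation mode: {mode}")

print(federation_allowed("friendly.example", "blocklist", {"bad.example"}))  # True
print(federation_allowed("bad.example", "blocklist", {"bad.example"}))       # False
print(federation_allowed("friendly.example", "allowlist", {"bad.example"}))  # False
```

The two modes differ only in their default: open-by-default with exceptions, or closed-by-default with exceptions, which is why a single configurable switch could cover both.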
General Post Features
- More advanced bios (e.g. follower-specific notes, pinned posts as part of bios, longer bios)
- Boosting/pinning posts of any privacy level (while preserving privacy)
- Controlling boostability separate from privacy (e.g. boostable private posts, non-boostable public posts)
- Public-only followers (Dreamwidth does this)
- Robust list functionality (define robust)
- Ability to credit custom emojis to their authors (not sure of feasibility)
If we can get to it
- Total separation of frontend and backend (e.g. backend-only installations, swappable frontends)
- Purging of locally cached remote content (posts, media) and retrieval on demand from remote instances
- Subject line in posts (semantically different from content warnings)
- If a person you follow blocks someone and includes a reason, you get a pop-up with that reason when you try to interact with the blocked person. You could also set a threshold: if X people I follow manually block this person, block them automatically for me and notify me.
- An abuser could use this to isolate somebody, sending mass alerts to their followers that <target> is bad and should be blocked, encouraging pile-ons and ostracism for accusations that may or may not even be true. It's the same as posting or boosting false information that would get said person blocked, just more insidious.
- More formally, let's say Bob blocks Anne. Bob can enter whatever he wants as the reason for the block. If Bob's followers trust him enough to follow him, they're likely to trust what he writes (whether or not it's true) and block Anne too, increasing her isolation. This is reducible to the problem of spreading false information. The blocking threshold can be easily overcome when X is too low (whether relatively or absolutely), or when a false call-out post gets enough attention. Having both of these would result in far-reaching effects across networks; users with a low X value would contribute towards surpassing higher X values set by other users.
- This would also be a source of information leakage: a fake account could be created to follow a particular person and interact with different accounts to "test" if said person has blocked them.
- Let block activities be routinely published and federated through the fediverse.
- A malicious person could start a custom instance with software for aggregating just those lists and publish them as harassment honeypots. They could even do the collection using what seem like ordinary Mastodon or Pleroma instances and publish them separately/anonymously, so that it isn't clear who is doing the collecting. Any group concerned about harassment would have to be extremely careful about what leaves the boundaries of an instance, and which instances they put their trust in. Any block/blocklist publication would become a feature waiting to be abused.
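The threshold proposal above ("if X people I follow manually block this person, block them for me") can be sketched in a few lines. All names here are hypothetical, and as the discussion notes, a low threshold makes block cascades easy to trigger:

```python
def should_auto_block(target, my_follows, blocks_by_user, threshold):
    """Return True when at least `threshold` of the accounts I follow
    have manually blocked `target`."""
    count = sum(
        1 for followed in my_follows
        if target in blocks_by_user.get(followed, set())
    )
    return count >= threshold

# Example: alice and bob have blocked anne; carol has not.
blocks_by_user = {"alice": {"anne"}, "bob": {"anne"}, "carol": set()}
my_follows = ["alice", "bob", "carol"]

print(should_auto_block("anne", my_follows, blocks_by_user, threshold=2))  # True
print(should_auto_block("anne", my_follows, blocks_by_user, threshold=3))  # False
```

Note how this makes the objection concrete: each user who auto-blocks would themselves count toward other users' thresholds on the next pass, so users with low X values push the count past higher X values set by others, and two false blocks can snowball across the network.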