Dear AppleVis Community,
AppleVis is continuing to evolve. We wanted to take a moment to share some updates made to the AppleVis website over the last couple of months:
- Deleting Comments:
- It is now possible for users to delete their own comments. Below each comment you have posted, you will now find a 'Delete' link alongside the 'Edit' link. When activated, you will be taken to a confirmation page. Press the 'Delete' button on this page to delete the comment.
- Whenever possible, we suggest using the 'Edit' functionality rather than 'Delete'. Completely deleting a comment can leave later comments in the discussion without context. For example, if you post an answer to a question, someone replies to you, and you then delete your comment, the other person's reply will remain and may reference a comment that no longer exists.
- OpenAI Moderation API:
- To help ensure that AppleVis is a safe and welcoming place for everyone, we have today begun testing the use of OpenAI's Moderation API to augment our content moderation efforts.
- The purpose of evaluating the Moderation API is to see if and how it can augment our manual moderation efforts, nothing more. AppleVis is open around the clock, and it is impossible for our team to monitor the site in real time 24/7/365. If someone starts posting explicit or harassing content in the middle of the night, for example, it is our hope that the model would catch that and not allow those posts to go through. We are evaluating this technology to determine its strengths and limitations.
- In broad terms, the model behind OpenAI's Moderation API checks for harassment, hate speech directed at people because of a personal characteristic, or content that is illicit, violent, sexual in nature, or discusses self-harm. The above link takes you to OpenAI's Moderation API documentation if you wish to read the full list of content categories and their accompanying descriptions.
- If a post is flagged, it will not be published and the user will be notified.
- While we wish it weren't so, we fully expect that the model behind the Moderation API will not always get things right. If you submit a post or comment and it is rejected, and you believe your content complies with our Forum Guidelines, please send us a message via our Contact Form. In your message, please be sure to include the full and complete subject line and body of the post you were trying to submit. (A rough sketch of what this kind of automated check might look like appears after this list of updates.)
- Forum Name on the Homepage:
- For forum topics on the homepage, the name of the forum is now displayed in parentheses along with other information about the post.
- We are aware of the need to have this information displayed on subsequent pages when accessing the "More Apple Posts" link. Our appreciation to the users who flagged this. We hope to have a resolution soon.
- Menu Redesign:
- We have redesigned the main menu of the website. Each menu item is now an expandable button, with links to various sections of the website.
- Account information is now easier to access and can be found by navigating to the bottom of the Main Menu and expanding the 'Account' submenu/button. When logged in, you will find 'My Account' and 'Log Out' links. When not logged in, the Account submenu will contain a link to log in.
- We would appreciate your input on the menu's navigability, the items included, their placement, and so on. While the overall design of the menu is fixed as part of our website's theme, which items are included and where they appear can easily be changed.
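For readers curious about what the kind of automated check described above looks like in practice, here is a rough sketch in Python using OpenAI's official SDK. It is purely illustrative and is not the code running on AppleVis; the function and variable names are invented for this example, and the exact response fields may differ between SDK versions.

```python
# Illustrative sketch only: asking OpenAI's Moderation API whether a newly
# submitted post should be held back. Not the AppleVis implementation; the
# helper name and return shape are assumptions made for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def should_hold_post(subject: str, body: str) -> tuple[bool, list[str]]:
    """Return (flagged, category names) for a submitted post."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=f"{subject}\n\n{body}",
    )
    result = response.results[0]
    # Collect the names of any categories the model flagged
    # (harassment, hate, violence, self-harm, sexual content, etc.).
    flagged_categories = [
        name for name, hit in result.categories.model_dump().items() if hit
    ]
    return result.flagged, flagged_categories


flagged, categories = should_hold_post(
    "iOS bug report", "VoiceOver focus jumps after unlocking the phone."
)
if flagged:
    print("Post held for review:", ", ".join(categories))
else:
    print("Post published.")
```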
We appreciate and welcome any and all feedback on these changes.
Comments
a few thoughts
I'm not a fan of AI moderating things. I guess time will tell how it does. I do like the ability to delete comments.
Be my Eyes,
Thank you. :) that's all I'm gonna say. Great stuff there.
agree with dennis
I am also not a fan of AI moderating the site, because I know AI can make mistakes sometimes.
My thoughts
I appreciate the ability to delete comments and view forum names on the homepage.
I have the same reservations about AI moderation as other commenters, but I feel that the incorrect removal of posts is less likely to be an issue on AppleVis than some other forums because of the nature of the topics we discuss and the community AppleVis attracts. If time zones and moderators' availability are the impetus for AI moderation, I wonder whether AppleVis could consider diversifying its moderation team to have greater representation outside North America. This may have advantages beyond removing undesirable content from the site.
I have a few points of feedback concerning the main menu.
AI moderation
I think, as long as false positives can be flagged quickly, maybe by including a button in the email one receives rather than the contact form, to push it up to human moderation, this is a good idea. It's better to accidentally mute harmless content for a short time than to let offensive stuff ride through; as I say, it just needs to be easy for the poster to put it right without too many hoops to jump through.
Great work on the other stuff.
Next update, no subject lines for replies... Cheers. ;)
Overall
I am digging the updates to the website. I do agree with the poster who suggested that the account option be listed first rather than last, but otherwise I like the changes.
Sidenote, don't listen to anything Oliver says, that guy is whack!
True story. 😱
I Like These Changes...
Great job as always, and kudos to the team for continuing to make this wonderful resource known as AppleVis available to the masses. The moderation documentation is important, and it will be interesting to see how it works out over time. Thanks once again and keep up the awesome work.
Whackness
I'm old enough to remember when whack meant good... Which is how I'm taking this. Yes, I'm very, very whack. Chronically so.
Thanks for trying to do the best you can to improve our experience
Appreciate the AppleVis team working so hard to make sure that we have the best experience possible. This is a great website and I hope to stay on here for a long time.
I agree with Singer Girl - thank you for your dedication
I will certainly offer feedback after trying your changes if I have anything worthwhile to offer. smile
Answers on AI Content Moderation
I appreciate everyone's concerns regarding our testing of AI tools to assist with content moderation. I want to offer some clarification about what we are trying to do (and not do).
First, I want to take a moment to address the elephant in the room: AI content moderation and concerns about potential censorship on AppleVis. While AppleVis does not take positions on issues happening outside of our mission and scope, we are also very aware that due to geopolitical factors, censorship is a concern for many in our community.
I want to expressly state that our only purpose in any moderation activities, be it manual review of content or scanning all new posts with an AI tool, is to help ensure that AppleVis is a safe, welcoming, and informative place for people of all ages, cultures, abilities, and lived experiences. While we do place some limits on the content that users may post, these limits are only to the extent necessary to achieve the aforementioned goal.
We are also keenly aware of how AI content moderation tools can "not" work. In September 2024, a post AppleVis shared to a major social media platform about iOS 18 bugs was removed because the platform's 'technology' determined that we violated their guidelines on trying to obtain followers. How the 'technology' came to that conclusion is beyond me, but I share this anecdote to illustrate that we are aware of the pitfalls of AI content moderation and have every intention of avoiding them. And if the disadvantages of using AI moderation tools outweigh the benefits, we will discontinue their use. Seriously.
With that in mind, here is what we are trying to accomplish:
It is our hope to see if and how AI can identify blatantly harmful content, prevent it from being posted to the site, and let users know in real time. I liken it to having a smoke detector in your house or going through a metal detector at an airport. We do not want AI flagging posts that require human insight to understand their full meaning and context. We do, however, want to see if there is a way in which we can use AI to identify really obvious and clear violations of our guidelines.
For example, if someone uses the f-bomb or a racial slur in a post (things which are both definite violations of our guidelines), we want the AI tool to identify that and not allow that to be published to the website. If AI can be used to identify and prevent that sort of content from being published, again in those very clear-cut situations, I believe we will be better for it.
Our hope is that the eventual solution will be one where we can input our forum guidelines and prompt the model to evaluate newly-submitted content only against those guidelines. If a post is flagged (and our expectation is that not many posts will be), it will not go through, and the user will be alerted in real-time. As we explore different solutions, one thing we are looking for is something that will share with the author why specifically the post did not go through and present that information in a neutral, nonconfrontational way.
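To give a purely hypothetical picture of the general shape such a solution could take, a guideline-based check might look something like the sketch below. To be clear, this is not our implementation; the model name, prompt wording, and response format are placeholders chosen for illustration only.

```python
# Purely illustrative sketch of the "guidelines as a prompt" idea described
# above. Not AppleVis code; the model, prompt, and JSON shape are assumptions.
import json

from openai import OpenAI

client = OpenAI()

FORUM_GUIDELINES = """1. Be respectful of other members.
2. No slurs, harassment, or explicit content.
(placeholder text standing in for the real guidelines)"""


def check_against_guidelines(post_text: str) -> dict:
    """Ask a model whether a post violates the guidelines, and why."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You review forum posts against the guidelines below. "
                    'Reply with JSON: {"violates": true/false, "reason": "..."}.\n\n'
                    + FORUM_GUIDELINES
                ),
            },
            {"role": "user", "content": post_text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(completion.choices[0].message.content)


verdict = check_against_guidelines("Example post text goes here.")
if verdict["violates"]:
    print("Held for review:", verdict["reason"])
```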
If, after evaluation, we determine that AI moderation tools are not a good fit for our needs (which is a distinct possibility), we won't use them. We have no interest in regulating speech on AppleVis beyond what is necessary to make our community safe, welcoming, and helpful for everyone. If, for example, the model consistently flags content that does not violate our guidelines, that is not something we will compromise on. We will either get the technology to work for our needs, or we won't use it at all.
The one thing I do ask is that if something you post is wrongly flagged, please try to be understanding and allow us to solve the issue constructively with you. We appreciate that discussions about limits on what can and cannot be posted are sensitive topics for many.
Thanks,
Michael