The afternoon of the second day was a set of unconference sessions. Unfortunately, I did not take notes on these at all, so my memory is bound to be spotty.
The first session was “Venting About Social Media”, where a few of us discussed the challenges of talking about asexuality on social media. I think this is a topic for a much longer post at some point in the future.
After that, I wandered into the tail end of the “Ace World Domination Using Glitter Bombs” session, but I have been sworn to secrecy about that one…
The second session was a combination of talking about Flibanserin and asexuality research, and I really wish I’d taken notes in that one. It started with a discussion about Flibanserin: what it is, what it isn’t, and how it works (or doesn’t). We also talked about its astroturf “grassroots” marketing campaign, which is trying to create demand for a pill that isn’t necessarily all that effective, and how this campaign throws asexual people under the bus in the quest for profits.
After the Flibanserin discussion, the topic changed to asexuality research, beginning with the flawed “1% statistic”. There was a conversation about whether or not it was even useful to quote a number that is clearly inaccurate. Its source was the interpretation of some of the responses on a 20+ year old British sex survey, which has a few obvious flaws:
- It was a voluntary sex survey, so asexual people would be less likely to care enough to respond.
- The questions weren’t really about asexuality.
- Awareness of asexuality was even lower when the survey was done than it is now, so many respondents wouldn’t even have known that asexuality was a possibility, and would have been more likely to mistake other types of attraction for sexual attraction.
Some of the people in the session didn’t like using the statistic at all, while others viewed it as a starting point and would say “At least 1% of people are asexual”. It was also noted that different surveys have come up with widely varying prevalence figures for homosexuality.
From there, the discussion turned to how to actually go about getting a more accurate statistic to use. Some notes:
- How do you run a survey about asexuality that will accurately gauge the prevalence of asexuality? How do you pick a sample, how do you get ace people to answer?
- Many people who are asexual don’t know that they are, so they won’t know to check the “asexual” box.
- There are things like the “Asexuality Identification Scale”, which can reasonably accurately tell if someone might be asexual, and which can be used in surveys. BUT… Is labeling someone in this way the right thing to do?
- Is there even a point to having a number? Is saying “Asexuality exists” enough?
There was also a brief conversation about “friendly” and “unfriendly” researchers. Some researchers have an open mind and will let their findings guide their work, while others have an agenda to prove that asexuality doesn’t exist. So even when two groups are running very similar projects, the outcomes can be very different.
And finally, if you are involved with a research study and you have an issue with something, contact the study’s ethics review board. It is their responsibility to investigate claims, and they have the power to stop a study if there is a problem. The researcher might just ignore you, but the ethics board has to follow up.