I'm a college student, though this semester I'm traveling with a team from my school, Bob Jones University. Grad school applications feel very close. I'm studying for the Math GRE with the Princeton Review book. Ok, that's enough about me; let's get to the good stuff.
Start date: June 15, 2018
[See website for background]
Earlier this week I finished generating the SCC polynomials; I used ~20 hours of AWS computing time to get up to the 16×16 case. Interestingly, the 14×14 case broke Mono (the C# runtime), and I had to rewrite that part of the project in C++. If you want the stack trace, let me know; it's some wacky memory-management thing, and I estimate the application was using 25–30 GB of memory at the time.
I was pleased to see that several of the polynomials match existing OEIS sequences 🙂 My favorite is that the number of size-60 components equals the number of possible knight moves on an (n − 2)×(n − 2) board (A035008).
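(As a quick sanity check of the knight-move side of that claim, here's a little brute-force count I wrote just for this post; it's not part of the project code. The closed form 8(m − 1)(m − 2) for an m×m board matches the A035008 terms.)

```python
# Brute-force count of legal knight moves on an m x m board, as a quick
# sanity check of the A035008 connection. Not part of the project pipeline.

def knight_moves(m):
    """Count ordered (from, to) knight moves on an m x m board."""
    deltas = [(1, 2), (2, 1), (-1, 2), (-2, 1),
              (1, -2), (2, -1), (-1, -2), (-2, -1)]
    count = 0
    for r in range(m):
        for c in range(m):
            for dr, dc in deltas:
                if 0 <= r + dr < m and 0 <= c + dc < m:
                    count += 1
    return count

# The count satisfies 8*(m-1)*(m-2) for m >= 2:
for m in range(2, 10):
    assert knight_moves(m) == 8 * (m - 1) * (m - 2)
```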
Also, this yields a general formula for the size of the largest component:
I’m working on publishing an article about all this with a math magazine. I’ve never done this before so I’m paranoid about it falling through somehow, but I will post appropriate linkage if/when it’s live.
Media Bias Dashboard
Start date: August 9, 2018 (new)
Goal: create a live console to track political events and measure reporting bias by news source. Inspiration: Bot Sentinel, Nick Diakopoulos. Right now I'm just sorting out what my approach will be, but I'm very excited about https://newsapi.org's free API, and I'll probably use SentiWordNet. I'll try to automate the backend with AWS (I have a free EC2 instance just sitting there… :))
I'll need to find all the articles about a given event in real time, across the major media sources. (For example, today everyone has a token article about the “Space Force.”) I think I can do this by looking for unusual words that appear in multiple news sources and then using some kind of similarity score to group all the articles on a given topic. (Basically a probabilistic union-find.) I don't expect too many false positives, and I probably don't care about false negatives.
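To make that concrete, here's a toy Python sketch of the idea: treat words that show up in several sources (minus everyday stopwords) as “signal” words, then union together headlines whose signal words overlap enough. All the names and thresholds here are placeholders I'm making up for illustration, not the final design.

```python
# Sketch of the event-clustering idea: words appearing in multiple headlines
# (excluding stopwords) act as signal words; headlines whose signal-word sets
# overlap strongly get unioned into one event cluster. Thresholds are
# placeholder guesses, not tuned values.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "on", "for", "and", "is"}

def cluster_headlines(headlines, min_sources=2, threshold=0.5):
    tokens = [set(h.lower().split()) - STOPWORDS for h in headlines]
    df = Counter(w for t in tokens for w in t)          # document frequency
    signal = [{w for w in t if df[w] >= min_sources} for t in tokens]

    # plain union-find over headline indices
    parent = list(range(len(headlines)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]               # path halving
            i = parent[i]
        return i

    for i in range(len(headlines)):
        for j in range(i + 1, len(headlines)):
            a, b = signal[i], signal[j]
            if a and b and len(a & b) / len(a | b) >= threshold:
                parent[find(i)] = find(j)               # Jaccard overlap -> same event

    groups = {}
    for i in range(len(headlines)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

On a toy batch, the three “space force” headlines cluster together and an unrelated markets headline stays on its own; the real version would obviously need a background corpus to decide what counts as “unusual.”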
Then we need to evaluate the slant – pro or con – of each article toward the subject matter. I'll likely work with just the headlines and article summaries I can pull from News API. Rather than use machine learning, Mechanical Turk, etc., I want to formalize “bias” in terms of word sentiment, which side is getting quoted, whether the article rebuts the subject's statements, and so on.
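The word-sentiment piece, at its absolute simplest, is just summing lexicon scores over a headline. The tiny hand-rolled lexicon below is a stand-in for SentiWordNet, and the whole thing is a toy; the real formalization will need quote attribution and the other signals I mentioned.

```python
# Toy slant score: sum per-word sentiment over a headline. The SENTIMENT dict
# is a tiny hand-made stand-in for SentiWordNet, purely for illustration.

SENTIMENT = {"historic": 1, "bold": 1, "promising": 1,
             "wasteful": -1, "ridiculous": -1, "boondoggle": -1}

def slant(headline):
    """Positive = favorable toward the subject, negative = unfavorable."""
    words = headline.lower().replace(",", " ").split()
    return sum(SENTIMENT.get(w, 0) for w in words)
```

So `slant("Bold, historic step for space policy")` comes out positive and `slant("A ridiculous and wasteful boondoggle")` negative, which is the axis I want each outlet plotted on.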
Also, I want this to be very user-friendly, probably with its own domain name.
I tentatively predict a beta release in October. Stay tuned! 🙂
Start date: July 4, 2018
[If I can get a better solution, I’ll do a better writeup. Fair? :)]
Suppose you have a 4×4 grid of playing cards – Ace, 2, 3, and 4 of each suit. You can swap any two adjacent (row/column) cards of the same rank or suit, and you can always swap an ace with the card above it. Can you get the four aces into the bottom row?
(Not every combination of cards is solvable, but the vast majority are.)
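For concreteness, here's a quick Python sketch of the move rules as stated, with the board as a tuple of 16 (rank, suit) pairs in row-major order and rank 1 for aces. The representation and names are just for illustration; the actual solver is a separate beast.

```python
# Sketch of the puzzle's move rules. A board is a tuple of (rank, suit) pairs
# in row-major order; rank 1 is an ace. Representation is illustrative only.

def legal_moves(board, n=4):
    """Yield every board reachable from `board` in one legal swap."""
    def swapped(i, j):
        b = list(board)
        b[i], b[j] = b[j], b[i]
        return tuple(b)

    for i in range(n * n):
        r, c = divmod(i, n)
        right = i + 1 if c + 1 < n else None
        below = i + n if r + 1 < n else None
        for j in (right, below):
            if j is None:
                continue
            (ra, sa), (rb, sb) = board[i], board[j]
            if ra == rb or sa == sb:
                yield swapped(i, j)   # adjacent cards sharing rank or suit
            elif j == below and rb == 1:
                yield swapped(i, j)   # an ace may always swap with the card above it
```

On the “sorted” board (each row one suit, ranks 1–4 across), every one of the 24 adjacent pairs is swappable, which gives a feel for how bushy the search tree gets.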
The number of states is quite large (16! ≈ 2×10¹³ arrangements), so I've tried a breadth-first search that uses heuristics to pick a subset of the adjacent states to expand. We assign a score to each arrangement, and only look at new arrangements that score at least as high as the best score we've seen (minus a small fudge factor).
I arbitrarily define success as finding a solution in the first 100,000 boards considered.
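In skeleton form, the pruned search looks like this. The `score` and `fudge` arguments stand in for my tuned heuristic and parameters; this is just the control flow, not the solver itself.

```python
# Skeleton of the pruned BFS described above: a state is only enqueued if it
# scores at least (best seen so far - fudge). `score` is a stand-in for the
# real heuristic; `limit` is the 100,000-board success cutoff.
from collections import deque

def pruned_bfs(start, neighbors, score, is_goal, fudge=2, limit=100_000):
    best = score(start)
    seen = {start}
    queue = deque([start])
    examined = 0
    while queue and examined < limit:
        state = queue.popleft()
        examined += 1
        if is_goal(state):
            return state
        for nxt in neighbors(state):
            s = score(nxt)
            if nxt not in seen and s >= best - fudge:
                best = max(best, s)
                seen.add(nxt)
                queue.append(nxt)
    return None           # gave up within the board budget
```

Plugging in the `legal_moves` generator plus a scoring function over ace positions gives the solver; the fudge factor is what lets it back out of local maxima.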
This algorithm, as described, solved about half of my 30 test arrangements. So I introduced randomness and various little tweaks and optimizations, tried a zillion different parameter values, and now I can solve about 83% of the test arrangements on average. (Some arrangements are solved 100% of the time; one was solved only 40% of the time.)
For your edification, the magic parameters are –
AceVal = 24, AceMoveDown = 9, AceMoveOver = 8, Lower = 15, QueueThrottle = 99, Luck = 0.896.
I want a better solution, but I’ll likely have to change my approach.