Friday, March 29, 2019

9 Best URL Shortener to Earn Money 2019

  1. Adf.ly

    Adf.ly is the oldest and one of the most trusted URL shortener services for making money by shrinking your links. Adf.ly gives you the opportunity to earn up to $5 per 1,000 views, though earnings depend on the demographics of the users who go on to click the shortened link.
    It offers a comprehensive reporting system for tracking the performance of each of your shortened URLs. The minimum payout is kept low at $5, paid on the 10th of every month. You can receive your earnings via PayPal, Payza, or AlertPay. Adf.ly also runs a referral program in which you earn a flat 20% lifetime commission on each referral.
  2. Wi.cr

    Wi.cr is also one of the 30 highest-paying URL shortener sites. You earn by shortening links: when someone clicks on your link, you get paid. They offer $7 per 1,000 views, with a minimum payout of $5.
    You can also earn through its referral program: when someone opens an account through your link, you get a 10% commission. The payment option is PayPal.
    • Payout for 1,000 views: $7
    • Minimum payout: $5
    • Referral commission: 10%
    • Payout method: PayPal
    • Payout time: daily

  3. Clk.sh

    Clk.sh is a newly launched and trusted link shortener network, and a sister site of shrinkearn.com. I like Clk.sh because it counts multiple views from the same visitor. If anyone is searching for a top URL shortener service, I recommend this one. Clk.sh accepts advertisers and publishers from all over the world. It gives all its publishers an opportunity to earn money, while advertisers reach their target audience at the cheapest rates. At the time of writing, Clk.sh was offering up to $8 per 1,000 visits, with a minimum CPM rate of $1.40. Like the Shrinkearn and Shorte.st URL shorteners, Clk.sh offers strong features to all its users, including good customer support, multiple-view counting, decent CPM rates, a good referral rate, multiple tools, and quick payments. Clk.sh offers its publishers a 30% referral commission and supports six payment methods.
    • Payout for 1,000 views: up to $8
    • Minimum withdrawal: $5
    • Referral commission: 30%
    • Payment methods: PayPal, Payza, Skrill, etc.
    • Payment time: daily

  4. Ouo.io

    Ouo.io is one of the fastest-growing URL shortener services. Its catchy domain name helps generate more clicks than other URL shortener services, giving you a good opportunity to earn more from your shortened links. Ouo.io comes with several advanced features as well as customization options.
    With Ouo.io you can earn up to $8 per 1,000 views. It also counts multiple views from the same IP or person, which makes it easy to earn money with its URL shortener service. The minimum payout is $5. Your earnings are automatically credited to your PayPal or Payoneer account on the 1st or 15th of the month.
    • Payout for 1,000 views: $5
    • Minimum payout: $5
    • Referral commission: 20%
    • Payout time: 1st and 15th of the month
    • Payout options: PayPal and Payza

  5. Linkbucks

    Linkbucks is another of the best and most popular sites for shortening URLs and earning money. It boasts a high Google PageRank as well as very high Alexa rankings. Linkbucks pays $0.50 to $7 per 1,000 views, depending on the country.
    The minimum payout is $10, and the payment method is PayPal. It also offers referral earnings: a 20% lifetime commission. Linkbucks runs advertising programs as well.
    • Payout for 1,000 views: $3-$9
    • Minimum payout: $10
    • Referral commission: 20%
    • Payment options: PayPal, Payza, and Payoneer
    • Payment time: daily

  6. Short.am

    Short.am provides a big opportunity for earning money by shortening links. It is a rapidly growing URL shortening service. You simply need to sign up and start shrinking links. You can share the shortened links across the web: on your webpage, Twitter, Facebook, and more. Short.am provides detailed statistics and an easy-to-use API.
    It even provides add-ons and plugins so that you can monetize your WordPress site. The minimum payout is $5. It pays users via PayPal or Payoneer, and it claims some of the best payout rates on the market. Short.am also runs a referral program in which you earn an extra 20% commission for life.
  7. Short.pe

    Short.pe is one of the most trusted sites in our top 30 highest-paying URL shorteners, and it pays on time. An interesting feature is that the same visitor can click your shortened link multiple times and each click counts. You earn by signing up, shortening your long URLs, and pasting them somewhere.
    You can paste them into your website, blog, or social networking sites. They offer $5 per 1,000 views, plus a 20% referral commission. The minimum payout is only $1, and you can withdraw via PayPal, Payza, or Payoneer.
    • Payout for 1,000 views: $5
    • Minimum payout: $1
    • Referral commission: 20% for lifetime
    • Payment methods: PayPal, Payza, and Payoneer
    • Payment time: daily

  8. LINK.TL

    LINK.TL is one of the best and highest-paying URL shortener websites. It pays up to $16 per 1,000 views. You just have to sign up for free, shorten your long URLs, and paste them into your website, blog, or social networking sites such as Facebook, Twitter, and Google Plus.
    One of the best things about this site is its referral system: it offers a 10% referral commission. You can withdraw your earnings once they reach $5.
    • Payout for 1,000 views: $16
    • Minimum payout: $5
    • Referral commission: 10%
    • Payout methods: PayPal, Payza, and Skrill
    • Payment time: daily

  9. CPMlink

    CPMlink is one of the most legitimate URL shortener sites. You can sign up for free, and it works like the other shortener sites: shorten your link, paste it somewhere on the internet, and earn a small amount every time someone clicks it.
    It pays around $5 per 1,000 views and offers a 10% referral commission. You can withdraw once your balance reaches $5; payment is then sent daily to your PayPal, Payza, or Skrill account after you request it.
    • Payout for 1,000 views: $5
    • Minimum payout: $5
    • Referral commission: 10%
    • Payment methods: PayPal, Payza, and Skrill
    • Payment time: daily
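Taken together, the rates above are simple CPM (per-1,000-view) figures. Here is a hedged sketch of the arithmetic in Python, using a few of the rates quoted in this list purely as examples; actual rates vary by visitor country and are not guaranteed:

```python
import math

# Rough CPM arithmetic for the shorteners above. Rates and minimum
# payouts are the figures quoted in this post, used only as examples.

def estimate_earnings(views: int, rate_per_1000: float) -> float:
    """Estimated earnings in USD for a given number of views."""
    return views / 1000 * rate_per_1000

def views_to_min_payout(min_payout: float, rate_per_1000: float) -> int:
    """How many views are needed before the balance hits the minimum payout."""
    return math.ceil(min_payout / rate_per_1000 * 1000)

services = {
    "Wi.cr":    {"rate": 7.0,  "min_payout": 5.0},
    "LINK.TL":  {"rate": 16.0, "min_payout": 5.0},
    "Short.pe": {"rate": 5.0,  "min_payout": 1.0},
}

for name, s in services.items():
    print(f"{name}: ${estimate_earnings(10_000, s['rate']):.2f} per 10k views, "
          f"{views_to_min_payout(s['min_payout'], s['rate'])} views to first payout")
```

So at Wi.cr's quoted $7 rate, for instance, 10,000 views would earn about $70, and the $5 minimum payout is reached after roughly 715 views.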

Predicting the France vs Croatia Result with Cass the Cat - 2018 World Cup Final



#thekittygamer, #dudoanworldcup2018, #meocass, #modric
---------------------------------------------------
*Cass the Cat predicts the 2018 World Cup final between France and Croatia.
*Cass the Cat predicted that Croatia would beat France in this match.
*Congratulations, Croatia! They deserve the 2018 World Cup title this year. Here's hoping they play well and reach their ultimate goal!
---------------------------------------------------
If you enjoyed it, remember to like, share, and subscribe. Thank you!
*Website: http://thekittygamer.blogspot.com/
*Facebook:
https://www.facebook.com/thekittygamer88/
*Twitter: https://twitter.com/thekittygamer88
*Google+:
https://plus.google.com/u/0/+TheKittyGamer5/
*Pinterest: https://www.pinterest.com/cutecat060803/
*Instagram: https://www.instagram.com/cutecat060803/
---------------------------------------------------
*Thanks for watching!

*Match preview: France vs Croatia - 2018 World Cup final
Source: Vietnamnet
The 2018 World Cup final has yet to kick off, but France are already treated as presumptive winners against a Croatia side making history for the first time after battling through three consecutive 120-minute matches.
Why France?
Exactly 20 years after their first triumph, hosted on home soil at France 1998, Didier Deschamps, then a player in the World Cup-winning squad, now has a great chance to lift the trophy a second time, this time as the manager of Les Bleus!
2018 World Cup final, France vs Croatia: why France?
France are favoured to win the 2018 World Cup against Croatia
As if by default, many believe this year's World Cup trophy will belong to France, even though the final has yet to be played.
That is not only because the name France carries more weight than Croatia, but also because they have a genuinely formidable squad: an elite collection of stars, the most valuable in the tournament.
And there is the way they have advanced, step by step, to the final: no whirlwind, even somewhat flat in the group stage, but growing more assured with each match thanks to Didier Deschamps's shrewd calculations.
Across three knockout matches, France settled every tie neatly within 90 minutes, never once needing extra time: a rousing 4-3 win over Argentina in the round of 16, followed by a 2-0 elimination of another South American side, Uruguay, and then a minimal 1-0 win that kept Belgium from playing at their best, enough to book their place in the 2018 World Cup final.
After Sunday's showpiece, only one side will still be wearing the winner's smile...
CBS Sport weighed up the two teams before the final: beyond having more talent, they rated France clearly superior in goal, with Hugo Lloris, and in defence, where Umtiti and Varane compare favourably with Croatia's Vida and Lovren.
Bleacher Report observed that France have Griezmann, Giroud, Paul Pogba, Thomas Lemar, Dembele, Nabil Fekir, Kylian Mbappe and more, names that can all cause trouble for the Croatian defence, Mbappe especially with his frightening pace.
The outlet even predicted that Mbappe and Giroud would score in a 2-0 win for France, while CBS Sport went for 2-1, with France crowned champions!

What do Croatia have?

England were perhaps too confident and let Croatia snatch the ticket from their hands, despite building a favourable position and taking an early lead. As Luka Modric warned after the match: don't assume we are tired, let's see who is more tired...
France boast the most expensive squad in the tournament
Before the semi-final against England, Croatia were thought to be more drained, having fought two consecutive 120-minute matches and penalty shootouts against Denmark and hosts Russia.
But Croatia did not crumble; they even finished the stronger side in that third straight 120-minute battle. No team at the 2018 World Cup has played as much extra time as Modric and his teammates.
Croatia have been praised for their superb spirit and tireless fighting. But those three matches also showed that they could not settle matters inside 90 minutes, a major challenge for their dream of lifting the trophy against France.
According to CBS, the midfield and a latent attacking threat are two reasons Croatia cannot be taken lightly, even though they lack a true star up front.
But Croatia have weapons of their own
N'Golo Kante is excellent for France, but Croatia have Modric and Rakitic, top-class midfielders who are better than their French counterparts at joining the attack.
Compared to France, Croatia have no stars as bright as Mbappe or Griezmann, but they are extremely dangerous thanks to the poaching instincts of Mario Mandzukic and of Ivan Perisic on the flank. The Inter star is tipped as a possible difference-maker in the 2018 World Cup final.
Appearing in their first World Cup final, Croatia have produced some of the tournament's finest moments. Coach Zlatko Dalic and his players have no reason to fear France, even if they are not the favourites.
Can Croatia prove the majority's judgment and preference wrong against the French? They certainly have the determination to try. But in a battle for the crown, victory goes to those with the composure to seize their chances... which is why most still lean towards France!

Grand Theft Auto IV


Grand Theft Auto IV is an action-adventure video game developed by Rockstar North and published by Rockstar Games. It was released for the PlayStation 3 and Xbox 360 consoles on 29 April 2008, and for Microsoft Windows on 2 December 2008. It is the eleventh title in the Grand Theft Auto series, and the first main entry since 2004's Grand Theft Auto: San Andreas. Set within the fictional Liberty City (based on New York City), the single-player story follows a war veteran, Niko Bellic, and his attempts to escape his past while under pressure from loan sharks and mob bosses. The open world design lets players freely roam Liberty City, consisting of three main islands.


The game is played from a third-person perspective and its world is navigated on foot or by vehicle. Throughout the single-player mode, players play as Niko Bellic. An online multiplayer mode is included with the game, allowing up to 32 players to engage in both co-operative and competitive gameplay in a recreation of the single-player setting. Two expansion packs were later released for the game, The Lost and Damned and The Ballad of Gay Tony, which both feature new plots that are interconnected with the main Grand Theft Auto IV storyline and follow new protagonists.


Thanks for Reading......

Oceanhorn Comes To Steam March 17th!



We have been working on something special these past couple of months, and now Oceanhorn is set to debut on Steam on March 17th, 2015!

http://store.steampowered.com/app/339200/

Oceanhorn's Steam version will be a completely remastered version of the game: a huge graphical and technical overhaul that makes the game more suitable for bigger screens and more powerful hardware.

Expanding to new horizons

The game has been redesigned for physical controls and it plays great whether you wish to play it with mouse / keyboard or with a gamepad!

All of the game's graphical assets have been tweaked for the PC platform. We added four times more polygons, sharper textures, normal maps, detail objects and new lighting effects such as dynamic ambient occlusion, soft shadows and realtime reflections to make Oceanhorn look stunning when played in 4K resolution.


Ambient Occlusion makes the cave entrance look darker

We have also read every review and feedback out there to improve the original game all around. We have remastered puzzles and taken care of spots that felt confusing or unfair for the players.

There are some new items in the shop that should hit the mark with players' demands. The Second Chance Potion might be expensive, but you will be glad to have it in a boss fight, and the Mana Refill Potion will be a priceless possession in the Frozen Palace. What could the function of the Ancient Arcadian Radar be, though?

Overall, we are pretty damn happy with the outcome of this definitive version of Oceanhorn and we are truly honored to welcome the Steam audience to enjoy the adventures at Uncharted Seas!


Thursday, March 28, 2019

Assault Android Cactus+ Blasts Off Today On Nintendo Switch



Assault Android Cactus+, the action-packed arcade-style twin-stick shooter by Witch Beam, launches into uncharted space on the Nintendo Switch today.

Assault Android Cactus+ debuts on Nintendo Switch with all-new features: Campaign+, new character costumes and aim assist options. Campaign+ reconstructs the original campaign with new enemy waves, more dynamic elements, and amped-up boss fights, all at 60 frames-per-second. Unlock a new costume for each of the nine androids while testing their unique loadouts to see what's most effective across 25 levels.




While Campaign+ and its new leaderboards are enticing additions for those familiar with the game, anyone can enjoy Cactus+'s frantic firefights with the addition of Aim Assist options. To let players get the most out of the local co-op experience, the game can be played with dual Joy-Con, Pro Controller and even single Joy-Con in any combination.
Recruit up to three friends and start shooting!

Lead Junior Constable Cactus and her android friends as they respond to a distress call and find a derelict space freighter under attack by its own robot workers. Keep the androids' batteries charged by embracing aggressive play and blasting hordes in frantic, 60 frames-per-second firefights.




Keep the entertainment going with Daily Drive, which offers one shot a day at setting a worldwide high score in a newly-generated level. Players craving further challenge will find it in Boss Rush and Infinity Drive. Earn credits to enable amusing EX options including first-person mode, visual filters, and the newly re-balanced MEGA Weapons. A new Movie Gallery joins unlockables like Developer Commentary, Jukebox, and Sound Test, making revisiting favorite moments easier than ever.

"Assault Android Cactus+ has something for everyone," says Tim Dawson, director, Witch Beam. "We hope the fans that have supported us over the years will enjoy Campaign+, and we look forward to first-timers feeling confident with aim assist."




Assault Android Cactus+ is available on Nintendo Switch for $19.99. The game supports English, French, Italian, German, Spanish and Japanese languages.








Wolfenstein: Youngblood Out This Summer - Eurogamer


March 2017

Another post I guess?

I haven't put as much time as I usually do into my posts, but right now I just wanted to post some more information.

March:

My stream for March is going to be all about world records! Every stream I do will be some world record attempt (high score, speedrun, etc). My main concentration is going to be reclaiming the Super Mario Bros warpless speedrun world record. I am going to be spending the majority of March on that project. However, March also has 5 WedNESdays! Which means 5 streams of world record attempts on 5 NES games I haven't touched yet this year.



Wednesday, March 27, 2019

Choices, Consequences And The Ability To Plan

This article goes over why it is so important for choices to matter in a game and how it all has to do with planning. If a user perceives that their actions have no consequences, you remove a core component of engagement - the ability to plan.



Say you are playing a game like The Walking Dead, or any other interactive movie, and you are faced with the choice whether or not to help someone who is hurt. You decide that you want to help the person, after which you never see them again for the rest of the game. Reloading a save and playing through the scenario you find out that if you chose not to help, the same thing plays out. Simply put: in this case, your choice really has no consequences.

While the scenario is made up, it presents a very typical situation that opinions are heavily divided on. Some people are totally okay with it for various reasons. But others will argue that this lack of consequences ruins the entire experience, as your choices don't really matter. It's easy to say that people who feel this way are simply playing the game the wrong way or are not properly immersed. However, I think it's important to investigate this reaction further, as it gets us closer to some fundamental problems of narrative games.

The argument from people who get annoyed by these non-choices goes something like this: if every branch leads back to the same path, then you really don't have any say in how the game plays out. You are not playing a game, you are only pretending that you are. It's like when you are playing a split-screen game and notice you've been watching the wrong side. The feeling of play is just an illusion. Nobody would tolerate a Super Mario where a pre-written script - not the player's skill - determines whether or not they survive a jump, so why tolerate games where all choices lead to the same conclusion?

One could counter that by saying the intention is to put you into a hard position and the game is about your varied emotional reactions as you ponder the different choices. It isn't about affecting how the game plays out - it is about making an emotional journey. If you require the game to show you the consequences of your actions, you are not immersed in the game's story - you are simply trying to optimize a system. This might sometimes be the case, but I also think this line of thinking is missing what the actual problem is: the failure of the player's mental model.

---

Let's start by breaking down the problem. A mental model, as explained in this previous post, is how the player perceives the game's world and their role in it. As you play a game, you slowly build a mental model of the various objects and systems that make it up and attach various attributes to them. At first a box might just be a piece of the background, but once you learn you can destroy it to gain items, attributes are added. The object gains complexity. The reverse can also happen. For instance, when you first see a character you might think you are able to speak to them, and therefore label them with various attributes you know humans usually have. But when you find out that the character is really just a piece of the background without any sort of agency, most of those attributes are lost.

Your mental model of a game is something that is continually revised as you are playing, and it is something that always happens, no matter what the game is. In fact, this is a process that is a core part of any medium, including books and films. So, obviously, when you are playing an interactive movie game, you are not simply reacting to a direct stream of information. You are answering questions based on your mental model.

Take my "will you help your hurt companion?" scenario from above. The knowledge you take into account about that choice is not just what is currently projected at you from the TV screen. It is a combination of everything you have gone through up to this point, along with a bunch of personal knowledge and biases. Even basic concepts like "hurt" and "companion" aren't just created in this moment. They are ideas that the game has spent a lot of time building up, be that for good or bad, from the very moment you started playing.

When you are faced with the hypothetical scene of a hurt companion, you are not just dealing with an animated image on a screen. You are dealing with a whole world constructed in your mind. This is what your choice will be based on. While it might objectively seem that everyone is reacting to the same scenario, they may in fact be dealing with quite different setups.

So when someone gets annoyed by the lack of consequences, it is not necessarily the direct consequences that are missing. The issue is that they have constructed a mental model around a real person in need, along with that person's future actions. So when it becomes apparent that the game doesn't simulate that as part of its own model, the player's mental model is broken and it feels like a big letdown. Remember that we don't play the game that is on the screen, we play the game as we perceive it in our heads. So when it turns out that your imagined world is fake, it has a huge impact.

It gets even worse once we take into account the fact that planning is fundamental to a sense of gameplay. As explained in a previous post, engaging gameplay is largely fueled by the ability to make plans. The player first simulates a course of action using their mental model, and then tries to execute it in the game. This is a continuous process, and "planning and executing the plan" is basically the same as playing. Interactive movies normally don't have a lot of gameplay, and it is really only in the choice moments that the player gets to take part in any actual play. Hence, when the choices turn out to have no consequences, it becomes clear that planning is impossible. In turn, this means that any meaningful play is impossible and the experience feels fundamentally broken.

As an example, take this experience I had with Heavy Rain:
[...] one scene I had made a plan of actions: to first bandage an unconscious person and then to poke around in his stuff. There really was nothing hindering me from doing so, but instead the game removed my ability to interact directly after caring for the person. The game interpreted my wanting to help the guy as meaning I did not want to poke around, thinking that the two were mutually exclusive actions. Of course I thought otherwise and considered it no problem at all to do some poking afterward.
I think that the people who complain the loudest about the lack of consequences are extra sensitive to situations like this. But, as I said, this is not due to the lack of consequences per se, but due to the impact it has on the consistency of their mental model and sense of play. It is really important to note that this is not due to some sort of lack of immersion or ability to roleplay. On the contrary, as I have described above, many of the issues arise because they mentally simulate the game's world and characters very vividly.


---

So the problem that we are faced with is really not a lack of consequences. It is that the underlying systems of the game are not able to simulate the mental model of a subset of players. One way of mending this is of course to add more consequences, but that is not a sustainable solution: the number of branches grows exponentially with each additional choice, and it quickly becomes impossible to cover every possible outcome. Instead it is much better to focus on crafting more robust mental models. Sure, this might entail adding consequences to choices, but that is just one possible solution - it is not the end goal.
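To make the combinatorics concrete, here is a small illustrative sketch (the numbers and function names are mine, not taken from any real engine or game):

```python
# If every binary choice truly branches and never merges, distinct story
# paths double with each choice; if branches merge back after one unique
# beat, authored content grows only linearly.

def full_branch_count(choices: int) -> int:
    """Distinct endings when every binary choice branches permanently."""
    return 2 ** choices

def merged_scene_count(choices: int) -> int:
    """Scenes to author when each choice merges back: one shared scene
    plus two short variant beats per choice point."""
    return choices * 3

for n in (5, 10, 20):
    print(f"{n} choices: {full_branch_count(n):>9} full branches "
          f"vs {merged_scene_count(n)} merged scenes")
```

Twenty genuinely branching binary choices would demand over a million distinct paths, which is why nearly every narrative game merges its branches back.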

As I outlined in the previous blog post on the SSM framework, it is incredibly important to keep track of how systems and story help form a mental model in the player's mind. For instance, if you start your game by saying "your actions will have consequences", that will immediately start filling your player's imagination with all sorts of ideas and concepts. Even how pre-release PR is presented can affect this. All of these things lay the groundwork for how the game is modeled in the player's head, and it is vitally important to make sure this mental model remains stable over the course of the game.

One of the main things to keep in mind is consistency. Remember that as someone plays a game, they are building up a mental simulation of how things are supposed to work. If you suggest that certain events are possible when they are in fact not, you run the risk of breaking the player's mental model. You either need to remove this sort of information, or make sure that players never end up in situations where these sorts of events feel like a valid option.

However, the most important thing to keep in mind is the ability to plan. A major reason why the lack of consequences can feel so bad is that these consequences were part of the player's gameplay plans. So when it becomes apparent that they don't exist, the whole concept of play breaks down. In all fairness, this might be OK for certain genres. If the goal is simply to make an interactive movie, then losing a subset of players may be a fair trade. But if the goal is to make proper interactive storytelling, then this is of paramount importance - planning must be part of the core experience.

That doesn't mean that every choice is something the player needs to base their plans on. But then there need to be other things that lie on a similar time scale and that are possible to predict and incorporate into plans. I think one way around this problem is to have a more system-focused feature running alongside the fuzzier narrative choices. The player's mental model will have its best predictive power around this more abstract system, and play revolves mostly around it. Then, when the more narrative choices are presented, they will feel more game-like and part of a solid simulation, despite not really having any consequences.

A simple and good example is the choices you have to make in Papers, Please. The game is driven by a type of survival simulation in which you need to earn credits (by doing proper passport checks) in order to keep your family alive. Entwined with this are choices about whom you will allow into the country. Many of these have no far-reaching consequences, but that doesn't really matter, because your ability to plan is still satisfied. Despite that, these choices still feel interesting and can have an emotional effect.
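As a toy sketch of that idea (entirely my own construction, not code from Papers, Please), here is a minimal resource loop the player can plan around, with the narrative choices imagined as sitting on top of it:

```python
# A minimal survival loop: the player earns credits per correct approval
# and pays rent each night. This numeric state is predictable and
# plannable, so consequence-free narrative choices layered on top still
# feel like part of a solid system.

def end_of_day(credits: int, approvals: int,
               rent: int = 5, pay_per_approval: int = 1) -> int:
    """Apply one day's earnings and expenses to the running balance."""
    return credits + approvals * pay_per_approval - rent

credits = 10
for approvals in (6, 4, 7):  # the player's performance on three days
    credits = end_of_day(credits, approvals)
print(credits)  # prints 12: the plannable state the story choices rest on
```

The player can always predict tomorrow's balance from today's performance, and that predictability is what keeps planning, and therefore play, intact.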

This sort of approach relies on combining several elements to produce the feeling of something that might not actually be there. The same principle is used in a wide range of applications, from how we view images on a TV to how films create drama through cuts. We don't always have to solve problems head-on; often the best way is to split the problem into many parts and solve each one on its own. The combined effect will then seem like a solution to the original problem. This technique is super important not just here, but for many other narrative problems. I will write a blog post later that goes into more detail.

Once you have a game that is consistent and that involves some sort of planning apart from the narrative choices, the probability of satisfying these players is greatly improved. And not only that: your narrative experience will improve overall, for all players, not just a subset. In this sense it is fair to view these extra-sensitive players as canaries in a coal mine, the first to react to a much bigger issue.

---

This blog post by no means presents the solution to end all problems with choices and consequences. But hopefully it gives a new way of thinking about the problem and some basic directions for finding a solution. I don't think we will ever find a perfect way of dealing with choices, but the better informed we are about the underlying causes, the better experiences we can provide.



Tuesday, March 26, 2019

IEEE Transactions On Games, Your New Favorite Journal For Games Research

At the start of 2018, I will officially become the Editor-in-Chief of the IEEE Transactions on Games (ToG). What is this, a new journal? Not quite: it is the continuation of the IEEE Transactions on Computational Intelligence and AI in Games (TCIAIG, which has been around since 2009), but with a shorter name and much wider scope.

This means that I will have the honor of taking over from Simon Lucas, who created TCIAIG and served as its inaugural Editor-in-Chief, and Graham Kendall, who took over from Simon. Under their leadership, TCIAIG has become the most prestigious journal for publishing work on artificial intelligence and games.

However, there is plenty of interesting work on games, with games or using games, which is not in artificial intelligence. Wouldn't it be great if we had a top-quality journal, especially one with the prestige of an IEEE Transactions, where such research could be published? This is exactly the thought behind the transformed journal. The scope of the new Transactions on Games simply reads:

The IEEE Transactions On Games publishes original high-quality articles covering scientific, technical, and engineering aspects of games.


This means that research on artificial intelligence for games, and games for artificial intelligence, is very welcome, just as it was in TCIAIG. But ToG will also accept papers on human-computer interaction, graphics, educational and serious games, software engineering in games, virtual and augmented reality, and other topics. The scope specifically indicates "scientific, technical, and engineering aspects of games", and I expect that the vast majority of what is published will be empirical and/or quantitative in nature. In other words, game studies work belonging primarily in the humanities will be outside the scope of the new journal. The same goes for work that has nothing to do with games, for example, game theory applied to non-game domains. (While there is some excellent work on game theory applied to games, much game theory research has nothing to do with games that anyone would play.) Of course, acceptance/rejection decisions will be taken based on the recommendations of Associate Editors, who act on the recommendations of reviewers, leaving some room for interpretation of the exact boundaries of what type of research the journal will publish.

Even before I take over as Editor-in-Chief, I am working together with Graham to refresh the editorial board of the journal. I expect to keep many of the existing TCIAIG associate editors, but I will need to replace some, and in particular to add associate editors with knowledge of the new topics the journal will cover, and with visibility in those research communities. I will also be working on reaching out to these research communities in various ways, to encourage researchers there to submit their best work to the IEEE Transactions on Games.

Given that I will still be teaching, researching and leading a research group at NYU, I will need to cut down on some other obligations to free up time and energy for the journal. As a result, I will be very restrictive when it comes to accepting reviewing tasks and conference committee memberships in the near- to mid-term future. So if I turn down your review request, don't take it personally.

Needless to say, I am very excited about taking on this responsibility and work on making ToG the journal of choice for anyone doing technical, engineering or scientific research related to games.

Five Nights At Freddy's


The Short

Pros
- This is technically a spiritual successor of Night Trap. Just...think about that for a moment
- Evokes a certain sense of uneasiness throughout, which then becomes genuine stress
- Does well at using its limited controls to make you feel powerless, increasing the spooks
- Animatronic anything just gives me the jibbilies
- The jump scares are surprisingly decent
- Despite its simple graphics, the ghetto feel actually works to the game's creepy benefit
- Its short length is a plus; it doesn't wear out its welcome too badly

Cons
- Game usually ends up relying on jump scares after the first or second night
- Doesn't do as much as one would hope to mix up the formula
- Gameplay mechanics themselves are fairly simplistic
- Why would the guy come back after the first night?
- Seems a little too much "Made for YouTubers"

Nothing seems wrong, everything is fine. 
The Long

What a weird gaming world we live in. With the rise of YouTube Let's Players like PewDiePie, Game Grumps, and others, indie devs now have an outlet to reach millions of people should they be lucky enough to be chosen by one of these crazy game-playing behemoths. Whether you love them, hate them, or find them obnoxious and wonder what the big deal is, it's pretty certain the face of gaming exposure has changed, for better or for worse.

One of the things that made these Let's Players so prominent early on was their reactions to horror games. Played in the dark, eyes wide and headsets on, people apparently got a kick out of watching other gamers totally freak out on camera in ways that were absolutely not made up or over-exaggerated in any way. Games like Amnesia and Slender went from niche horror titles to cultural megahits, and other games that were easy for these YouTubers to react to (Surgeon Simulator, Happy Wheels, Flappy Bird) began to emerge to embrace this new market.

Now we have the latest in this low-budget, high on jump scares endeavor, Five Nights at Freddy's. While I don't want to say the developer went out of his way to make a game that YouTubers would play and promote (which is exactly what happened with this game), I will say that there wouldn't even be a market for this kind of game if the floodgates hadn't already been opened. Because, you see, this game is basically the Sega CD disasterpiece, Night Trap. Yeah. Really. That alone makes me want to love it.

But is it actually a good game? A bad game? And, more importantly, is it 2spooky4me? Well, fill out your job applications and stay away from Chuck E. Cheese, because we're gonna find out.

This seems fine. Everything here is fine. 
The plot of the game is relatively simple. Starved for work, you take a five-day gig at Freddy's, a sort of Chuck E. Cheese-style pizza place for kids. The job seems simple enough: sit in a room as a night security guard from the hours of ten to six, and at the end of the week collect your $120. It's apparently set in like 1987 or something, because the cameras are all garbage and everything looks...well, like it's from 1987, which explains the $120 being actually worth it for this job. Only not really.

The first night out you get a phone call that goes straight to your voicemail from someone who claims to have been the previous person working there. Apparently at night the animatronic creatures (there are four total, one being the titular "Freddy") are allowed to wander around on their own. No biggie, but if they manage to find you, their screwed-up programming will read you as an exoskeleton missing its suit, and they'll then stuff you into one of the spares they have lying around. Which wouldn't be a problem, except bones and vital organs don't really mesh well with the complex machinery inside these things. So basically they'll kill you. What kind of freaking job is this?

THINGS ARE VERY QUICKLY BECOMING LESS FINE.
The way to prevent yourself from being brutally murdered by these straight up freakin' creepy animatronics is to keep an eye on them (as they are aware when you are watching them over the cameras) and, in case of an emergency, to close one of two doors leading to your office. The big trick is that doing anything (even just sitting with the light on) drains your very limited supply of power. Pulling up the cameras, switching cameras, and even having the light that flickers on and off in the dark hallway outside the open door drains the power just a bit. Closing the doors in particular is a massive power hog, which means you have to keep them open as long as possible unless you want to be out of power at 5 AM and at the mercy of these things.

Like I said, Night Trap. You can't leave the office; the only controls you have are deciding when to look through the cameras (which have pretty crappy, usually black-and-white picture, and one room doesn't even have visual, just sound) and when to shut the doors. As part of the trick, there are blind spots between the rooms closest to you and your actual room, which means you'd best be putting the camera interface away (which takes up the full screen) and checking the space just outside your doors (which is just a flickering light in the pitch darkness), lest some freaking robot duck sneak in when you aren't looking and turn you into a mobile Mickey Mouse.

I REGRET EVERYTHING!
The various characters have a variety of nuances that you have to learn quickly. Most will only move when you aren't looking, but there isn't enough power to keep an eye on them at all times. One just sort of wanders, heading for you, then giving up and moving around a bit. Another can teleport (yeah, not fair), though the game gives a faint audio cue when it's about to happen. Freddy...I don't know what he does, just kind of lumbers about and makes me upset. The worst is the creepy fox guy (see above), who is normally hiding behind a stage curtain. But if you don't look at the curtain (or, inversely, look at it too much) he'll suddenly burst free and make a beeline straight for you (he's the only character you can see moving on camera). He'll pound on the door for a while (or murder you; again, see above) before retreating, and the process cycles again. Having it trigger on both "too much" and "too little" was a clever idea, meaning you are constantly stressed out.

And hoo boy, this game is super stressful. The limited vision, the constant worrying about power, the characters that move erratically and then stand perfectly still when caught on camera (or in your field of vision, standing outside the door before they come in to get you): it all combines into a massive, stressful bundle of fun. With so much stuff to manage and the constant fear of getting jumped or missing someone, the game thrives on making things miserable. The worst is having to, on occasion, switch off the cameras because you know you're draining too much power, meaning you are sitting there for one, two, three painful seconds wondering if they're coming for you. And when you do run out of power? Well, they don't come straight for you, but you can bet that when you hear that little musical jingle, Freddy is coming. He's coming to getcha.

You stay there, rabbit. No tricks! Tricks are for kids!

So the big question is this: is this game actually, genuinely scary? I'll preface my answer with the usual caveat that comes with spooky stuff: your mileage may vary. The game doesn't rely on blood or gore to provide its scares, and I commend it for that. The creepy atmosphere, the voicemails, and the dead silence save for the hum of your fan and the click of switching cameras are more than enough to unsettle. However, after a while (usually around day three), it stopped being scary and was just stressful (but in a good way).

The main reason for this is twofold. The first is that the game is extremely reliant on jump scares. Now, its jump scares are actually pretty dang good, especially when you are playing the game for the first time. There's a massive beginner's trap on day two that I won't spoil, but needless to say, if you don't listen to the voicemail very intently, you are gonna have a bad time. Nothing is worse than the long pause when the power is out and you see Freddy's eyes glowing, and it's 5 AM, which means it just might, might roll over and you'll win. The screen grows dark and...

Well, either there's a massive scream of horror as Freddy fills the screen and takes you, or the clock rolls over to 6 AM. Either way, you're gonna jump.

You really are a prima donna, aren't you?
There are other jump scare tricks. The fox, as mentioned, can make it from his stage to you in just a few seconds, often resulting in a frantic button press for the door (or a screaming yell as he bursts in to murder you). Other characters can sneak into the room while you are looking at the cameras, and the game is evil in that they will wait for you to lower the interface at your own discretion before attacking. It's a clever trick that makes you scared to do just about anything.

But at its core, all these things are just what I said before: jump scares. That's the main crux. They are very obviously jump scares, too, because the sounds they make are horrifically loud compared to the rest of the game and sound worse than Nazgul screams. Like the rest of the game, they evoke overwhelming stress, and usually cause a pretty good jump and a cuss word.

The issue is that eventually jump scares get old. After the fifth or sixth time of getting jumped, getting caught off guard is less frightening and more just stressful and annoying. Once you learn the audio cues that they're in the room when your camera is up, you can expect the "scare" before raising the camera. Is it still startling to have a sudden burst of noise pierce the silence? Yeah, but that's startling, not scary.

As a self-proclaimed expert on ducks, I can say with a 98% certainty that they don't have teeth.
The second issue is the lack of variety. The game basically plays all its cards on the third day: you are introduced to all the characters, and you can start learning their patterns. Beyond that it's just the difficulty ramping up: the AI gets smarter, and...well, that's it, actually. There aren't really any dramatic changes to the formula. It would have been cool if after a few days they started cutting the wires on certain cameras, or using decoys to distract you. Maybe mix up their movement patterns a bit, or introduce a few more animatronics into the mix (or even put you in a different building). The game's brevity is its strength in this regard (only five days, plus a bonus, extra-hard sixth day if you hate yourself), seeing as I'd say it still maintains its spooky atmosphere up until the end of day three, but then it stops being really scary and becomes more of a game you are playing, and in that regard it's about as exciting as playing Night Trap. Which is not very exciting at all.

So, in a way, this game is a perfect fit for this generation of "horror" fans: people who like quick jumps and rapid-fire scares, who aren't really satisfied with slow burns unless they quickly result in ramped-up jump scares (or gore splatter). It's less "horror" and more "thriller" (or "suspense"), as the slow burn followed by the sudden, rapid release, then followed by the slow burn again is pretty much a staple of how to create smart tension within horror games (see P.T. or Silent Hill 2 for good examples of this). But I will commend it in that it starts as a spooky slow burn: the first two days are genuinely unsettling, even if you do manage to not get jump scared. It's just too bad it couldn't keep that momentum going.

Nothing to see here. That's good, right? Please? I want my mommie...
Graphically the game does wonders with its obviously low budget. The entire game is pretty much a still shot with a fuzzy VHS camera filter put over it, with layers randomly placed on top of it when the beasties are there to spook you out. What really works is the detail in the lighting; the game is really good at taking a dark scene (like the one above) and then slyly sneaking in a dark shadow that wasn't there before, or some glowing eyes peeking out of a corner. Some are more obvious (as you can see from the screenshots earlier in the review), but considering this isn't even as complex as, say, the FMV-filled Night Trap, I have to hand it to the developer: he did a whole lot with next to nothing.

The only real animations are when you are assaulted, and the freaking nightmare-fuel running fox (who runs like Crash Bandicoot, which if you think about it makes him way less scary). The animations are really janky and the models fairly low-budget up close, but it fits both the theme of the animatronics and the fact that this game looks like a game from the late 90s/early 2000s. It has that "3D is just getting started on PCs so we're gonna pre-render everything" look like the early Fallout games, and I actually kind of love it for that. It's a throwback to a graphical style that nobody ever throws back to (thank goodness they didn't make this game with pixel art), and one I have a lot of nostalgia for. I'd like to see it done more (famous last words here...).

Crash Bandicoot is coming for your babies. 

Not gonna lie: I went into this game fully expecting to be an elitist reviewing jerkbag and give it a low score because I thought it was just pandering YouTube bait. Hey, at least I'm being honest here.

But after playing through it, my opinion changed to one of genuine reverence for this developer. Is this game scary like a Hitchcock movie or other classic horror games? Maybe at the beginning, but not really. The gameplay is intentionally overly difficult (what is this place powered by, double-A's?) as well as extremely simplistic, the graphics are mostly static images, and it's riddled with jump scares. But despite all that, the developer managed to create a fun, genuinely unsettling horror game that takes a relatively untapped formula and uses it very effectively to do exactly what it sets out to do. 

It isn't too long, and doesn't really outstay its welcome (any longer and I'd have griped about how the gameplay gets stressfully tedious). Its scares early on are genuine and downright unnerving, and the style is one I really enjoyed.

But perhaps the biggest catch? It released at only $5. That's insane. Truly, I find it hard to believe. 

Ok, I'm done, that's enough heart attacks for today. 

Will I play this game again? Probably not. Not because it spooked me (which it did, at least at first), but mostly because this game is so damned stressful I can feel myself losing my hair. But will I recommend it to a friend so they'll have a hellish 2-3 hours for the low, low price of $5? Abso-freaking-lutely. 

Nice work, Scott Cawthon. Eat up all that free YouTube marketing. You totally deserve it.

Four out of five nights at Freddy's. 


Now excuse me while I never, ever play this damn game again. 

Empiricism And The Limits Of Gradient Descent

This post is actually about artificial intelligence, and argues a position that many AI researchers will disagree with. Specifically, it argues that the method underlying most of deep learning has severe limitations which another, much less popular method can overcome. But let's start with talking about epistemology, the branch of philosophy which is concerned with how we know things. Then we'll get back to AI.

Be warned: this post contains serious simplifications of complex philosophical concepts and arguments. If you are a philosopher, please do not kill me for this. Even if you are not a philosopher, just hear me out, OK?

In the empiricist tradition in epistemology, we get knowledge from the senses. In the 17th century, John Locke postulated that the mind is like a blank slate, and that the only way in which we can get knowledge is through sense impressions: these impressions figuratively write our experience onto this blank slate. In other words, what we perceive through our eyes, ears and other sense organs causes knowledge to be formed and accumulated within us.

The empiricist tradition of thought has been very influential for the last few centuries, and philosophers such as Hume, Mill and Berkeley contributed to the development of empiricist epistemology. These thinkers shared the conviction that knowledge comes to us through experiencing the world outside of us through our senses. They differed in what they thought we can directly experience - for example, Hume thought we cannot experience causality directly, only sequences of world-states - and in exactly how the sense impressions create knowledge, but they agreed that sense impressions are what creates knowledge.

In the 20th century, many philosophers wanted to explain how the (natural) sciences could be so successful, and what set the scientific mode of acquiring knowledge apart from superstition. Many of them were empiricists. In particular, the Vienna Circle, a group of philosophers, mathematicians, and physicists inspired by the early work of Wittgenstein, articulated a philosophy that came to be known as Logical Empiricism. The basic idea is that sense impressions are all there is, and that all meaningful statements are complex expressions that can be analyzed down to their constituent statements about sense impressions. We gain knowledge through a process known as induction, where we generalize from our sense impressions. For example, after seeing a number of white swans, you can induce that swans are white.

A philosopher who was peripheral to the Vienna Circle but later became a major figure in epistemology in his own right was Karl Popper. Popper shared the logical empiricists' zeal for explaining how scientific knowledge was produced, but differed radically in where he thought knowledge came from. According to Popper, facts do not come from sense impressions. Instead, they come "from within": we formulate hypotheses, meaning educated guesses, about the world. These hypotheses are then tested against our sense impressions. So, if we hypothesize that swans are white, we can then check this against what our eyes tell us. Importantly, we should try to falsify our hypotheses, not to verify them. If the hypothesis is that swans are white, we should go looking for black swans, because finding one would falsify our hypothesis. This can be motivated by noting that if we already think swans are white, we're not getting much new information by seeing lots of white swans, but seeing a black swan (or trying hard and failing to find a black swan) would give us much more new information.

Popper called his school of thought "critical rationalism". This connects to the long tradition of rationalist epistemology, which just like empiricist epistemology has been around for most of the history of philosophy.  For example, Descartes' "I think, therefore I am" is a prime example of knowledge which does not originate in the senses.

Among (natural) scientists with a philosophical bent, Popper is extremely popular. Few modern scientists would describe themselves as logical empiricists, but many would describe themselves as critical rationalists. The main reason for this is that Popper describes ways of successfully creating scientific knowledge, and the logical empiricists do not. To start with the simple case, if you want to arrive at the truth about the color of swans, induction is never going to get you there. You can look at 999999 white swans and conclude that they are all white, but the millionth may be black. So there can be no certainty. With Popper's hypothetico-deductive method you'd make a hypothesis about the whiteness of swans, and then go out and actively try to find non-white swans. There's never any claim of certainty, just of a hypothesis having survived many tests.

More importantly, though, the logical empiricist story suffers from the problem that more complex facts are simply not in the data. F=ma and E=mc2 are not in the data. However many times you measure forces, masses and accelerations of things, the idea that the force equals mass times acceleration is not going to simply present itself. The theories that are at the core of our knowledge cannot be discovered in the data. They have to be invented, and then tested against the data. And this is not confined to large, world-changing theories.

If I already have the concepts of swan, white and black at the ready, I can use induction to arrive at the idea that all swans are white. But first I need to invent these concepts. I need to decide that there is such a thing as a swan. Inductivists such as Hume would argue that this could happen through observing that "a bundle of sense impressions" tends to co-occur whenever we see a swan. But a concept such as a swan is actually a theory: that the animal is the same whether it's walking or flying, that it doesn't radically change its shape or color, and so on. This theory needs to somehow be invented, and then tested against observation.

In other words, empiricism is at best a very partial account of how we get knowledge. On its own, it can't explain how we arrive at complex concepts or theories, and it does not deliver certainty. Perhaps most importantly, the way we humans actually do science (and other kinds of advanced knowledge production) is much more like critical rationalism than like empiricism. We come up with theories, and we work to confirm or falsify them. Few scientists just sit around and observe all day.

Enough about epistemology for now. I promised you I would talk about artificial intelligence, and now I will.

Underlying most work in neural networks and deep learning (the two terms are currently more or less synonymous) is the idea of stochastic gradient descent, in particular as implemented in the backpropagation algorithm. The basic idea is that you can learn to map inputs to outputs by feeding the inputs to the network, seeing what comes out at the other end, and comparing it with the correct answer. You then adjust all the connection weights in the neural network so as to bring the output closer to the correct output. This process, which has to be done over and over again, can be seen as descending the error gradient, hence the name gradient descent. You can also think of this as the reward signal pushing the model around, repelling it whenever it does something bad.

(How do you know the correct output? In supervised learning, you have a training set with lots of inputs (e.g. pictures of faces) and corresponding outputs (e.g. the names of the people in the pictures). In reinforcement learning it is more complex, as the input is what an agent sees of the world, and the "correct" output is typically some combination of the actual reward the agent gets and the model's own estimate of the reward.)
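To make the update rule above concrete, here is a toy sketch: a "network" with a single weight, fitted by gradient descent to data generated by the made-up rule y = 2x. All names and numbers are illustrative; this does not correspond to any particular library.

```python
# Gradient descent on a one-weight linear model: prediction = w * x.
# The squared error E = (w*x - y)^2 has gradient dE/dw = 2 * (w*x - y) * x,
# so each update moves w a small step against that gradient.

def gradient_descent(data, w=0.0, lr=0.01, epochs=100):
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y        # prediction minus correct answer
            w -= lr * 2 * error * x  # step down the error gradient
    return w

data = [(x, 2.0 * x) for x in range(1, 6)]  # tiny "training set" for y = 2x
w = gradient_descent(data)
print(w)  # converges toward 2.0
```

Note how every single example nudges the weight: the data directly causes each change to the model, which is exactly the point made below about induction.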

Another type of learning algorithm that can be used for both supervised learning and reinforcement learning (and many other things as well) is evolutionary algorithms. This is a family of algorithms based on mimicking Darwinian evolution by natural selection; algorithms in this family include evolution strategies and genetic algorithms. When using evolution to train a neural net, you keep a population of different neural nets and test them on whatever task they are supposed to perform, such as recognizing faces or playing a game. Every generation, you throw out the worst-performing nets, and replace them with "offspring" of the better-performing neural nets; essentially, you make copies and combinations of the better nets and apply small perturbations ("mutations") to them. Eventually, these networks learn to perform their tasks well.

So we have two types of algorithms that can both be used for performing both supervised learning and reinforcement learning (among other things). How do they measure up?

To begin with, some people wonder how evolutionary algorithms could work at all. It is perhaps important to point out here that evolutionary algorithms are not random search. While randomness is used to create new individuals (models) from old ones, fitness-based selection is necessary for these algorithms to work. Even a simple evolution strategy, which can be implemented in ten or so lines of code, can solve many problems well. Additionally, decades of development of the core idea of evolution as a learning and search strategy have resulted in many more sophisticated algorithms, including algorithms that base the generation of new models on adaptive models of the search space, algorithms that handle multiple objectives, and algorithms that find diverse sets of solutions.
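For the sake of illustration, a bare-bones evolution strategy really does fit in about ten lines. Here is a sketch of a (1+1)-ES minimizing a simple sum-of-squares function; the test function, step size, and budget are arbitrary choices, not a recommendation:

```python
import random

random.seed(0)  # fixed seed only to make the run repeatable

def one_plus_one_es(fitness, dim=5, sigma=0.1, generations=2000):
    """(1+1) evolution strategy: mutate the parent, keep the child
    only if it is at least as good. No gradient is ever computed."""
    parent = [random.uniform(-1, 1) for _ in range(dim)]
    for _ in range(generations):
        child = [x + random.gauss(0, sigma) for x in parent]
        if fitness(child) <= fitness(parent):  # fitness-based selection
            parent = child
    return parent

def sphere(xs):
    return sum(x * x for x in xs)  # minimum of 0 at the origin

best = one_plus_one_es(sphere)
print(sphere(best))  # small; the optimum is 0
```

The mutations are blind, but the selection step is what makes this search rather than a random walk: bad changes are simply discarded.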

Gradient descent is currently much more popular than evolution in the machine learning community. In fact, many machine learning researchers do not even take evolutionary algorithms seriously. The main reason for this is probably the widespread belief that evolutionary algorithms are very inefficient compared to gradient descent. This is because evolutionary algorithms seem to make use of less information than gradient descent does. Instead of incorporating feedback every time a reward is found in a reinforcement learning problem, a typical evolutionary algorithm only takes the end result of an episode into account. For example, when learning to play Super Mario Bros, you could easily tell a gradient descent-based algorithm (such as Q-learning) to update every time Mario picks up a coin or gets hurt, whereas with an evolutionary algorithm you would usually just look at how far Mario got along the level and use that as feedback.

Another way in which evolution uses less information than gradient descent is that the changes to the network are not necessarily done so as to minimize the error, or in general to make the network as good as possible. Instead, the changes are generally completely random. This strikes many as terribly wasteful. If you have a gradient, why not use it?

(Additionally, some people seem to dislike evolutionary computation because it is too simple and mathematically uninteresting. It is true that you can't prove many useful theorems about evolutionary algorithms. But come on, that's not a serious argument against evolutionary algorithms, more like a prejudice.)

So is the idea that evolutionary algorithms learn less efficiently than gradient descent supported by empirical evidence? Yes and maybe. There is no question that the most impressive results coming out of deep learning research are all built on gradient descent. And for supervised learning, I have not seen any evidence that evolution achieves anything like the same sample-efficiency as gradient descent. In reinforcement learning, most of the high-profile results rely on gradient descent, but they also rely on enormous computational resources. For some reinforcement learning problems which can be solved with small networks, evolutionary algorithms perform much better than any gradient descent-based methods. They also perform surprisingly well on playing Atari games from high-dimensional visual input (which requires large, deep networks) and are the state of the art on the MuJoCo simulated robot control task.

Do evolutionary algorithms have any advantage over gradient descent? Yes. To begin with, you can use them even in cases where you cannot calculate a gradient, i.e. where your error function is not differentiable. You cannot directly learn program code or graph structures with gradient descent (though there are indirect ways of doing it), but that's easy for evolutionary algorithms. However, that's not the angle I wanted to take here. Instead I wanted to reconnect to the discussion of epistemology this post started with.

Here's my claim: learning by gradient descent is an implementation of empiricist induction, whereas evolutionary computation is much closer to the hypothetico-deductive process of Popper's critical rationalism. Therefore, learning by gradient descent suffers from the same kind of limitations as the empiricist view of knowledge acquisition does, and there are things that evolutionary computation can learn but gradient descent probably cannot.

How are those philosophical concepts similar to these algorithms? In gradient descent, you are performing frequent updates in the direction that minimizes error. The error signal can be seen as causal: when there is an error, that error causes the model to change in a particular way. This is the same process as when a new observation causes a change in a person's belief ("writing our experience on the blank slate of the mind") within the empiricist model of induction. These updates are frequent, making sure that every signal leaves a distinct impression on the model (batch learning is often used with gradient descent, but is generally seen as a necessary evil). In contrast, in evolutionary computation, the change in the model is not directly caused by the error signal. The change is stochastic: not directly dependent on the error, not in general in the direction that minimizes the error, and in general much less frequent. Thus the model can be seen as a hypothesis, which is tested by applying the fitness function. Models are generated not from the data, but from previous hypotheses and random changes; they are evaluated by testing their consequences using the fitness function. If they are good, they stay in the population and more hypotheses are generated from them; if they are bad, they die.

How about explicitly trying to falsify the hypothesis? This is a key part of the Popperian mode of knowledge acquisition, but it does not seem to be part of evolutionary computation per se. However, it is part of competitive coevolution. In competitive coevolution, two or more populations are kept, and the fitness of the individuals in one population depends on how well they perform against individuals in the other population. For example, one population could contain predators and the other prey, or one could contain image generators and the other image recognizers. As far as I know, the first successful example of competitive coevolution was demonstrated in 1990; the core idea was later re-invented (though with gradient descent instead of evolutionary search) in 2014 as generative adversarial networks.
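As a minimal (and entirely made-up) sketch of this idea: below, one population of linear models tries to fit a hidden target function, while a second population of test inputs evolves to expose the models' remaining errors, a crude, automated form of going looking for black swans. Every name and number here is an illustrative assumption.

```python
import random

random.seed(1)  # fixed seed only to make the run repeatable

def target(x):
    return 3 * x + 1  # the "hidden" function (an arbitrary example)

def error(model, x):
    a, b = model
    return abs((a * x + b) - target(x))

def coevolve(pop_size=20, generations=400, sigma=0.2):
    models = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop_size)]
    tests = [random.uniform(-1, 1) for _ in range(pop_size)]
    half = pop_size // 2
    for _ in range(generations):
        # A model is only as good as its worst result on the current tests...
        models.sort(key=lambda m: max(error(m, x) for x in tests))
        # ...and a test is good if it exposes large errors across the models.
        tests.sort(key=lambda x: -sum(error(m, x) for m in models))
        # Replace the worse half of each population with mutated copies of the better half.
        models[half:] = [(a + random.gauss(0, sigma), b + random.gauss(0, sigma))
                         for a, b in models[:half]]
        tests[half:] = [max(-1.0, min(1.0, x + random.gauss(0, sigma)))
                        for x in tests[:half]]
    models.sort(key=lambda m: max(error(m, x) for x in tests))
    return models[0]

a, b = coevolve()
print(a, b)  # should end up near (3, 1)
```

The test population plays the role of the falsifier: models are never rewarded for fitting easy inputs, only for surviving the hardest counterexamples the other population has found so far.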

If you accept the idea that learning by gradient descent is fundamentally a form of induction as described by empiricists, and that evolutionary computation is fundamentally more like the hypothetico-deductive process of Popperian critical rationalism, where does this take us? Does it say anything about what these types of algorithms can and cannot do?

I believe so. I think that certain things are extremely unlikely to ever be learned by gradient descent. To take an obvious example, I have a hard time seeing gradient descent ever learning F=ma or E=mc2. It's just not in the data - it has to be invented. And before you reply that you have a hard time seeing how evolution could learn such a complex law, note that using evolutionary computation to discover natural laws of similar complexity was demonstrated almost a decade ago. In that case, the representation (mathematical expressions represented as trees) is distinctly non-differentiable, so it could not even in principle be learned through gradient descent. I also think that evolutionary algorithms, working in fewer and bolder strokes rather than a million tiny steps, are more likely to learn all kinds of abstract concepts. Perhaps the area where this is likely to be most important is reinforcement learning, where allowing the reward to push the model around does not seem to be a very good idea in general, and testing and discarding complete strategies may be much better.

So what should we do? Combine multiple types of learning of course! There are already hundreds (or perhaps thousands) of researchers working on evolutionary computation, but for historical reasons the evolutionary computation community is rather dissociated from the community of researchers working on machine learning by gradient descent. Crossover between evolutionary learning and gradient descent yielded important insights three decades ago, and I think there is so much more to learn. When you think about it, our own intelligence is a combination of evolutionary learning and lifetime learning, and it makes sense to build artificial intelligence on similar principles.
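One classic way to combine the two is a memetic algorithm with Baldwinian lifetime learning: evolution proposes starting points (bold strokes), gradient descent polishes each candidate (tiny steps), and selection acts on the polished result. A minimal sketch on a made-up rippled 1D landscape follows; the loss function and all constants are illustrative, not from any particular paper.

```python
import math
import random

def loss(w):
    """Toy non-convex landscape: local minima everywhere, best basin near w = 3.5."""
    return (w - 3.0) ** 2 + 2.0 * math.sin(5.0 * w) + 2.0

def grad(w):
    return 2.0 * (w - 3.0) + 10.0 * math.cos(5.0 * w)

def refine(w, steps=100, lr=0.01):
    """Lifetime learning: plain gradient descent from an inherited start point.
    On its own it just falls into the nearest ripple and stays there."""
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Baldwinian memetic search: genotypes are starting points; fitness is the
# loss AFTER gradient refinement, but the refined weights are not inherited.
random.seed(0)
pop = [random.uniform(-10.0, 10.0) for _ in range(20)]
for generation in range(15):
    parents = sorted(pop, key=lambda w: loss(refine(w)))[:5]
    pop = parents + [p + random.gauss(0, 1.0) for p in parents for _ in range(3)]

best = refine(min(pop, key=lambda w: loss(refine(w))))
```

Gradient descent alone from a random start gets trapped in whichever ripple is nearest; evolution alone is slow to pinpoint a minimum. Together they reliably land in one of the best basins, which is the Baldwin-effect intuition behind hybrids like this.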

I am not saying gradient descent is a dead end nor that it will necessarily be superseded. Backpropagation is obviously a tremendously useful algorithm and gradient descent a very powerful idea. I am also not saying that evolutionary algorithms are the best solution for everything - they very clearly are not (though some have suggested that they are the second best solution for everything). But I am saying that backpropagation is by necessity only part of the solution to the problem of creating learning machines, as it is fundamentally limited to performing induction, which is not how real discoveries are made.

Some more reading: Kenneth Stanley has thought a lot about the advantages of evolution in learning, and he and his team have written some very insightful things about this. Several large AI labs have teams working on evolutionary deep learning, including Uber AI, Sentient Technologies, DeepMind, and OpenAI. Gary Marcus has recently discussed the virtues of "innateness" (learning on evolutionary timescales) in machine learning. I have worked extensively with evolutionary computation in game contexts, such as for playing games and generating content for games. Nine years ago, a perhaps surprising set of authors and I set out to briefly characterize the differences between phylogenetic (evolutionary) and ontogenetic (gradient descent-based) reinforcement learning. I don't think we got to the core of the matter back then - this blog post summarizes a lot of what I was thinking but did not know how to express properly at the time. Thanks to several dead philosophers for helping me express my thoughts better. There's clearly more serious thinking to be done about this problem.

I'm thinking about turning this blog post into a proper paper at some point, so feedback of all kinds is welcome.