Thursday, January 2, 2020

Moto X4 on Google FI - Microphone Muffled / Bad - Possible Fix

I've been having people complain that they cannot hear me or that I sound extremely muffled when I make phone calls on my Moto X4. I confirmed it was definitely the phone because I tested with a wired headphone/microphone combo without issues.

Many forums suggest returning the device as an RMA. That really isn't an option for my nearly two-year-old phone, which is out of warranty. One interesting comment I read concerned the position of the microphone holes -- one of which is very near the fingerprint reader. My phone is always in a case, so the rear microphone hole seemed clear, but the front microphone hole looked slightly dirty.

The phone is rated IP68, which allows for submersion in up to 1 meter of water for 1 hour. Since my next option was to buy a new phone, I decided to put the IP68 rating to the test.

Of course, what I did next you should only do at your own risk. I am not responsible for anything you decide to do to your own phone.

With 100% rubbing alcohol, I cleaned out the microphone holes with a cotton swab and let them dry. Using a recording app, the sound was marginally better. I added more drops of alcohol, but this time I used an electric toothbrush to vibrate out all the junk, which was more than I expected (ick).

Voila, things sounded 100% better and I no longer need a new phone!


Moto X4 on Google FI - Enable RCS Messages

On my Moto X4 (Google FI), I couldn't enable RCS messages in the Messages application because it was stuck "verifying" my phone number. After digging through a bunch of support forums, this is the procedure that ultimately worked for me.
  1. Turn on Airplane mode.
  2. Open the Settings -> Apps & notifications.
    1. If you don't see all your apps, first tap See all apps or App info.
    2. At the top right, tap More and then Show system.
  3. Find the Carrier Services app.
    1. Tap Force stop and confirm.
    2. Tap Storage and then Clear data.
  4. Go back to the "App info" screen and find the Messages app.
    1. Tap Force stop and confirm.
    2. Tap Storage and then Clear data.
  5. Turn off Airplane mode.
  6. Open Messages and check the settings; it should say "Connected - Chat features are ready for use".
I'm blogging this so I don't forget the process if this ever comes up in the future.

Thursday, December 12, 2019

Python Sort List of Dictionaries with Accented Characters

I needed to sort a list of dictionaries by a label key. The labels happen to be in French, and normal sorting misorders entries when accented characters are present. I happened to have PyUCA installed from PyPI.

Sunday, November 10, 2019

Custom 404 Page for Django CMS

For a side project, I needed a 404 page that was editable by users in Django CMS. Suffice it to say, it took a while to figure out how to do this without Django caching the CMS page response, while still returning the correct 404 HTTP status code. Other techniques serve the page as a 200 OK, which is wrong for search engines.
  1. First create a 404 page in Django CMS and publish it.
  2. Set your handler404 in your urls.py (change the path to your view file accordingly):
    handler404 = 'shared.views.page_not_found'
  3. The view:

    from cms.views import details
    from django.http import HttpResponse

    def page_not_found(request, exception):
        # '404' is the slug you gave the page in Django CMS
        response = details(request, '404')
        return HttpResponse(content=response.rendered_content,
                            content_type='text/html; charset=utf-8',
                            status=404)


Friday, January 16, 2015

Gunicorn dyno death spiral on Heroku -- Part II

After a lot of investigation, we've figured out there is an issue with NewRelic, Postgres and Gunicorn. I summarized the issue here:

https://discussion.heroku.com/t/gunicorn-dyno-death-spiral/136/13

After discussing this with Graham Dumpleton over Twitter, it appears there is an issue with libpq. Below is a summary of a rather long Twitter discussion. Anything in quotes is from Graham, though it may have been paraphrased or reworded slightly to make sense here. I didn't want anyone to think I was passing off Graham's words as my own...

The real issue is caused by "the use of the New Relic agent in a background thread which does SSL HTTP calls -- surfaced issues with libpq SSL connections to database." This can be replicated with a script someone wrote when the bug was reported to Postgres in October 2014 (see link below). It's not the fault of the NR agent -- it just uses a background thread and triggers the same behavior. That's why you can reproduce the issue without the NR agent.

So the "NR agent will create a background thread, but if you had other threads for other reasons which did SSL connections, [it] still occurs. If process is completely single threaded without multiple request handler threads, nor background threads doing SSL, [then it] is okay."

http://www.postgresql.org/message-id/CAHUL3dpWYFnUgdgo95OHYDQ4kugdnBKPTjq0mNbTuBhCMG4xvQ@mail.gmail.com

So in a perfect storm, libpq deadlocks which causes large issues for Gunicorn. The reason why is how Gunicorn is designed and that a "main thread is used to handle requests and if that deadlocks then signals aren't handled or if it uses a pipe of death, it will never return to accept on connection where it gets message to shutdown."

By default, "Django creates a new database connection per request which exacerbates the problem." So the problem can mostly be alleviated by using some sort of database connection pooling -- either the persistent connections built into Django 1.6+ or something like django-postgresql-pool or PgBouncer. However, pooling only reduces the likelihood of the problem, so it can still cause issues for Gunicorn. Also, because of the way the main thread works, the Gunicorn timeout directive had no effect on the problem in my testing: the worker was still waiting for Postgres and therefore still alive, despite the fact that Heroku had killed the request on the client side.
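For reference, the persistent connections built into Django 1.6+ are enabled with the CONN_MAX_AGE database setting; a minimal settings.py sketch (database name, credentials, and host below are placeholders, not our actual config):

```python
# settings.py (fragment) -- all credentials are placeholders
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
        # Reuse each connection for up to 10 minutes instead of
        # opening a new one per request (0 restores the old behavior,
        # None keeps connections open indefinitely)
        'CONN_MAX_AGE': 600,
    }
}
```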

The only real workaround until libpq is fixed is to use something other than Gunicorn, like uWSGI, and deal with the harakiri'd requests, OR to not mix DB calls with calls to HTTPS resources. In our case, we are using Amazon S3, so some requests need to make a bucket request to get information and sometimes trigger this deadlock.

I know many other people in my local user group that have written off Gunicorn on Heroku thinking the issue was Gunicorn and/or New Relic. However, the problem is in Postgres and it is exacerbated when the NR agent is used.

While we (GreatBizTools) have a workaround in place, it would be great if Heroku (and maybe New Relic) could poke the Postgres folks to fix this deadlock, which was reported to them in October 2014. I can't imagine the number of folks that have been caught out in the rain over the past year or so by this. There are several blog posts that (now wrongly) point fingers at Heroku for not supporting Gunicorn correctly.

We can't be the only people using Heroku with Python, New Relic, and Postgres -- it must be a really popular combination.

---

Edit: Heroku is now aware of the issue and is working with respective parties in order to rectify this low level issue. -- January 16th, 2015

Monday, January 12, 2015

Gunicorn dyno death spiral on Heroku

FYI -- Gunicorn dyno death spiral on Heroku -- Part II is now available

-----

We recently released our app XXXX on Heroku using Gunicorn however we quickly found in even the most modest of production load (as little as 10 users) that some dynos would stop responding and start throwing continuous H12 errors for hours.

We experienced three separate events (from January 5-6) where one or more dynos would stop serving requests with Gunicorn, throw H12 errors for every request, and the load metrics would spike from 0.2-0.5 to 1.5 or higher on that particular dyno. The only remedy was to read the logs and manually run heroku ps:restart web.X to kill the appropriate dyno.

We experienced the same issue as outlined on this thread on the Heroku forums:

https://discussion.heroku.com/t/gunicorn-dyno-death-spiral/136

We were able to track it down to "bad clients" using the application -- they were always Verizon Wireless or Sprint Mobility aircards on laptop computers. We have a single client using this application so it was easy to confirm with them that the reverse IP was indeed Verizon Wireless or Sprint.

Our guess is that a client would not close a connection or respond with ACK messages for the streamed response and therefore exceed the 30-second limit. When Heroku raised an H12 for it, the Gunicorn worker was left tied up in an unrecoverable state. This would repeatedly happen (we were only running 3 workers per dyno) until all workers on a single dyno stopped responding. At that point, the routing mesh would continue routing requests to the rogue dyno, but the dyno would just return H12s until it was manually restarted.

We have confirmed it is NOT our application code. The application runs just fine when Gunicorn is swapped out for uWSGI (we also tested Waitress with success). We have been running uWSGI on the XXXX application since the evening of January 6th and have not experienced any more events where dynos death spiral out of control. We still occasionally see a bad client and request; however, we are using the harakiri option in uWSGI, so the rogue worker is killed and respawned after 25 seconds.
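For reference, the harakiri setting mentioned above goes in the uWSGI configuration; a minimal ini sketch (the module path, worker count, and port are placeholders, not our actual deployment config):

```ini
[uwsgi]
# Placeholder WSGI entry point
module = myproject.wsgi:application
master = true
processes = 3
http-socket = :8000
# Kill and respawn any worker stuck on a single request
# for more than 25 seconds
harakiri = 25
```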

The question we have is why Heroku continues to recommend using Gunicorn when other people like ourselves have experienced terrible results with this particular application server.

Tuesday, December 30, 2014

Python / Django / Selenium: Set viewport size (window size)

The default viewport size for PhantomJS is about the width of a phone. I found plenty of examples of setting the viewport (window) size for Java, C#, and Ruby, but not much for Python. It's ridiculously simple. Below is an example, mainly as a future reminder for myself, but here for your enjoyment. This sets the viewport to 1280px wide by 720px high.
from django.test import LiveServerTestCase
from selenium import webdriver

class CustomLiveServerTestCase(LiveServerTestCase):
    def setUp(self):
        self.wd = webdriver.PhantomJS()
        # Default PhantomJS viewport is tiny; force 1280x720
        self.wd.set_window_size(1280, 720)
        super(CustomLiveServerTestCase, self).setUp()

    def tearDown(self):
        super(CustomLiveServerTestCase, self).tearDown()
        self.wd.close()