Posts tagged: twitter

Annoying notifications from Twitter on mobile

Lately I have been receiving a lot of annoying notifications from Twitter on my phone, primarily from @twibbon. Try as I might, I couldn’t work out how to turn them off. Even disabling notifications for the Twitter app entirely didn’t stop them.

Luckily Mashable came to the rescue this morning with this post:

http://mashable.com/2014/05/24/stop-annoying-twitter-notifications/


Astro Soichi

Astro Soichi, or Soichi Noguchi (野口 聡一), took part in a long-duration space flight from December 2009 to June 2010 aboard the International Space Station. During his stay on the ISS, he tweeted and posted Twitpic images of the Earth. His Twitter handle is @astro_soichi, and his photos can still be seen on Twitpic here: http://twitpic.com/photos/Astro_Soichi.

I wanted to keep a local copy of his images to look at later, so I wrote a simple Python script to download them from Twitpic. To use the script, you also need to install BeautifulSoup and workerpool.
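If you have pip available, both dependencies can be installed from PyPI. This is a sketch assuming a Python 2 environment, which is what the script targets (the `BeautifulSoup` package name here is the old BeautifulSoup 3 series, not the later `beautifulsoup4`):

```shell
pip install BeautifulSoup workerpool
```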

Note that this script is provided “as is” and without warranty. If running it turns your PC into a cardboard box, that is entirely your own fault. 🙂

#!/usr/bin/env python
"""
   Copyright 2010 RoaringMoon.com
   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
"""
from BeautifulSoup import BeautifulSoup
import re
import urllib
import os
import string
import workerpool
valid_chars = "-_.,() %s%s" % (string.ascii_letters, string.digits)
url_pattern = re.compile("(?:([^:/?#]+):)?(?://([^/?#]*))?([^?#]*\.(?:jpg))(?:\?([^#]*))?(?:#(.*))?")
working_dir = os.getcwd()
username = "Astro_Soichi"
class DownloadJob(workerpool.Job):
    "Job for downloading a given URL to a local file."
    def __init__(self, url, save_to):
        self.url = url          # the URL to download when the job runs
        self.save_to = save_to  # local path to save the image to

    def run(self):
        urllib.urlretrieve(self.url, self.save_to)

def get_next_page(div_soup):
    "Return the href of the 'More photos' link, or False on the last page."
    nav_links = div_soup.findAll('a', attrs={'class': "nav-link"})
    for nlink in nav_links:
        if nlink.text == u'More photos >':
            return nlink.get('href')
    return False

def main():
    # Initialise a pool, 5 worker threads in this case
    pool = workerpool.WorkerPool(size=5)
    opener = urllib.FancyURLopener({})
    f = opener.open("http://twitpic.com/photos/%s" % username)
    s = f.read()
    f.close()
    div_soup = BeautifulSoup(s)
    next_page = True
    div_collection = div_soup.findAll('div', attrs={'class': "profile-photo-img"})
    while next_page:
        for item in div_collection:
            # Fetch the photo's detail page to build a descriptive filename
            f = opener.open("http://twitpic.com%s" % item.a.get('href'))
            s = f.read()
            f.close()
            soup = BeautifulSoup(s)
            img_name = soup.html.head.title.text
            img_name = img_name[:len(img_name) - 10]  # strip the " on Twitpic" suffix
            photo_info = soup.findAll('div', attrs={'id': "photo-info"})
            try:
                photo_date = photo_info[0].find('div')
            except IndexError:
                soup.decompose()
                continue
            img_name = "%s %s" % (img_name, photo_date.text)
            soup.decompose()
            # Fetch the full-size page and pull out the .jpg URL
            f = opener.open("http://twitpic.com%s/full" % item.a.get('href'))
            s = f.read()
            f.close()
            soup = BeautifulSoup(s)
            full_image = None  # reset each time so a stale URL is never reused
            for val in soup.findAll('img'):
                m = url_pattern.match(val.get('src'))
                if m:
                    full_image = m.group(0)
            if full_image:
                safe_name = ''.join(c for c in img_name if c in valid_chars)
                local_filename = os.path.join(working_dir, "%s - %s.jpg" % (username, safe_name))
                pool.put(DownloadJob(full_image, local_filename))
            soup.decompose()
        next_page = get_next_page(div_soup)
        if next_page:
            f = opener.open("http://twitpic.com%s" % next_page)
            s = f.read()
            f.close()
            div_soup.decompose()
            div_soup = BeautifulSoup(s)
            div_collection = div_soup.findAll('div', attrs={'class': "profile-photo-img"})
        else:
            div_soup.decompose()
    pool.shutdown()
    pool.wait()

if __name__ == '__main__':
    main()

By changing the username variable, the script can be used to create a local backup of your own Twitpic collection. Be aware that the method for ensuring that the filenames can be handled by NTFS is very crude. The full script can be downloaded here.
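That crude filename handling is just a character whitelist: anything outside the allowed set is silently dropped. As a minimal standalone sketch of the same idea (`sanitize` is a hypothetical helper name, not part of the script):

```python
import string

# Whitelist of characters considered safe in NTFS filenames
# (the same set the script builds into valid_chars)
VALID_CHARS = "-_.,() %s%s" % (string.ascii_letters, string.digits)

def sanitize(name):
    """Drop any character not in the whitelist. Crude, but NTFS-safe."""
    return ''.join(c for c in name if c in VALID_CHARS)

print(sanitize("a:b?c"))  # prints "abc"
```

Note that this can silently collapse distinct names to the same filename, so later downloads may overwrite earlier ones.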


The Joys of Twitter

I have recently rediscovered the joys of Twitter, so you can now follow my inane ramblings at:

http://twitter.com/vivaceuk
