/*
 * The low performance USB storage driver (ub).
 *
 * Copyright (c) 1999, 2000 Matthew Dharm (mdharm-usb@one-eyed-alien.net)
 * Copyright (C) 2004 Pete Zaitcev (zaitcev@yahoo.com)
 *
 * This work is a part of Linux kernel, is derived from it,
 * and is not licensed separately. See file COPYING for details.
 *
 * TODO (sorted by decreasing priority)
 *  -- Return sense now that rq allows it (we always auto-sense anyway).
 *  -- set readonly flag for CDs, set removable flag for CF readers
 *  -- do inquiry and verify we got a disk and not a tape (for LUN mismatch)
 *  -- verify the 13 conditions and do bulk resets
 *  -- highmem
 *  -- move top_sense and work_bcs into separate allocations (if they survive)
 *     for cache purists and esoteric architectures.
 *  -- Allocate structure for LUN 0 before the first ub_sync_tur, avoid NULL. ?
 *  -- prune comments, they are too voluminous
 *  -- Resolve XXX's
 *  -- CLEAR, CLR2STS, CLRRS seem to be ripe for refactoring.
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/usb.h>
#include <linux/usb_usual.h>
#include <linux/blkdev.h>
#include <linux/timer.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/mutex.h>
#include <scsi/scsi.h>

#define DRV_NAME "ub"

#define UB_MAJOR 180

/*
 * The command state machine is the key model for understanding of this driver.
 *
 * The general rule is that all transitions are done towards the bottom
 * of the diagram, thus preventing any loops.
 *
 * An exception to that is how the STAT state is handled. A counter allows it
 * to be re-entered along the path marked with [C].
 *
 *       +--------+
 *       ! INIT   !
 *       +--------+
 *           !
 *        ub_scsi_cmd_start fails ->--------------------------------------\
 *           !                                                            !
 *           V                                                            !
 *       +--------+                                                       !
 *       ! CMD    !                                                       !
 *       +--------+                                                       !
 *           !                            +--------+                      !
 *           was -EPIPE -->-------------->! CLEAR  !                      !
 *           !                            +--------+                      !
 *           !                                !                           !
 *           was error -->------------------- ! ------------------------->\
 *           !                                !                           !
 *  /--<-- cmd->dir == NONE ?                 !                           !
 *  !        !                                !                           !
 *  !        V                                !                           !
 *  !    +--------+                           !                           !
 *  !    ! DATA   !                           !                           !
 *  !    +--------+                           !                           !
 *  !        !             +---------+       !                           !
 *  !        was -EPIPE -->! CLR2STS !       !                           !
 *  !        !             +---------+       !                           !
 *  !        !                  !            !                           !
 *  !        !      was error -->-- ! ------------------------->\
 *  !        was error -->----- ! ----------- ! ---------------->\
 *  !        !                  !            !                           !
 *  !        V                  !            !                           !
 *  !\--->+--------+            !            !                           !
 *       ! STAT   !<------------/            !                           !
 *  /--->+--------+                          !                           !
 *  !        !                               !                           !
 * [C]       was -EPIPE -->----------\       !                           !
 *  !        !                       !       !                           !
 *  +<---- len == 0                  !       !                           !
 *  !        !                       !       !                           !
 *  !        was error -->---------- ! ----- ! ----------------->\
 *  !        !                       !       !                           !
 *  +<---- bad CSW                   !       !                           !
 *  +<---- bad tag                   !       !                           !
 *  !        !                       V       !                           !
 *  !        !                  +--------+   !                           !
 *  !        !                  ! CLRRS  !   !                           !
 *  !        !                  +--------+   !                           !
 *  !        !                       !       !                           !
 *  \------- ! ------------------[C]-/       !                           !
 *           !                               !                           !
 *  cmd->error---\              +--------+   !                           !
 *           !   +------------->! SENSE  !<--/                           !
 *  STAT_FAIL----/              +--------+                               !
 *           !                       !                                   V
 *           !                       V                               +--------+
 *           \-----------------------\------------------------------>! DONE   !
 *                                                                   +--------+
 */

/*
 * This many LUNs per USB device.
 * Every one of them takes a host, see UB_MAX_HOSTS.
 */
#define UB_MAX_LUNS 9

/*
 * Minor numbers (partitions) reserved for every LUN's gendisk.
 */
#define UB_PARTS_PER_LUN 8

#define UB_MAX_CDB_SIZE      16         /* Corresponds to Bulk */

#define UB_SENSE_SIZE  18

/*
 * The Bulk-Only Transport command and status wrappers, as they
 * travel across the wire.
 */

/* command block wrapper */
struct bulk_cb_wrap {
        __le32  Signature;              /* contains 'USBC' */
        u32     Tag;                    /* unique per command id */
        __le32  DataTransferLength;     /* size of data */
        u8      Flags;                  /* direction in bit 7 */
        u8      Lun;                    /* LUN */
        u8      Length;                 /* length of the CDB */
        u8      CDB[UB_MAX_CDB_SIZE];   /* max command */
};

#define US_BULK_CB_WRAP_LEN     31
#define US_BULK_CB_SIGN         0x43425355      /* spells out 'USBC' */
#define US_BULK_FLAG_IN         1
#define US_BULK_FLAG_OUT        0

/* command status wrapper */
struct bulk_cs_wrap {
        __le32  Signature;      /* should = 'USBS' */
        u32     Tag;            /* same as original command */
        __le32  Residue;        /* amount not transferred */
        u8      Status;         /* see below */
};

#define US_BULK_CS_WRAP_LEN     13
#define US_BULK_CS_SIGN         0x53425355      /* spells out 'USBS' */
#define US_BULK_STAT_OK         0
#define US_BULK_STAT_FAIL       1
#define US_BULK_STAT_PHASE      2

/* bulk-only class specific requests */
#define US_BULK_RESET_REQUEST   0xff
#define US_BULK_GET_MAX_LUN     0xfe
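
/*
 * Both wrappers go over the wire as-is, so the structures above must be
 * exactly US_BULK_CB_WRAP_LEN and US_BULK_CS_WRAP_LEN bytes, unpadded.
 */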

/*
 * A forward declaration: every command keeps a pointer back to the device.
 */
struct ub_dev;

#define UB_MAX_REQ_SG   9       /* cdrecord requires 32KB and maybe a header */
#define UB_MAX_SECTORS 64

/*
 * A second is more than enough for a 32K transfer (UB_MAX_SECTORS)
 * even if a webcam hogs the bus, but some devices need time to spin up.
 */
#define UB_URB_TIMEOUT  (HZ*2)
#define UB_DATA_TIMEOUT (HZ*5)  /* ZIP does spin-ups in the data phase */
#define UB_STAT_TIMEOUT (HZ*5)  /* Same spinups and eject for a dataless cmd. */
#define UB_CTRL_TIMEOUT (HZ/2)  /* 500ms ought to be enough to clear a stall */

/*
 * An instance of a SCSI command in transit.
 */
#define UB_DIR_NONE     0
#define UB_DIR_READ     1
#define UB_DIR_ILLEGAL2 2
#define UB_DIR_WRITE    3

#define UB_DIR_CHAR(c)  (((c)==UB_DIR_WRITE)? 'w': \
                         (((c)==UB_DIR_READ)? 'r': 'n'))

enum ub_scsi_cmd_state {
        UB_CMDST_INIT,                  /* Initial state */
        UB_CMDST_CMD,                   /* Command submitted */
        UB_CMDST_DATA,                  /* Data phase */
        UB_CMDST_CLR2STS,               /* Clearing before requesting status */
        UB_CMDST_STAT,                  /* Status phase */
        UB_CMDST_CLEAR,                 /* Clearing a stall (halt, actually) */
        UB_CMDST_CLRRS,                 /* Clearing before retrying status */
        UB_CMDST_SENSE,                 /* Sending Request Sense */
        UB_CMDST_DONE                   /* Final state */
};

struct ub_scsi_cmd {
        unsigned char cdb[UB_MAX_CDB_SIZE];
        unsigned char cdb_len;

        unsigned char dir;              /* 0 - none, 1 - read, 3 - write. */
        enum ub_scsi_cmd_state state;
        unsigned int tag;
        struct ub_scsi_cmd *next;

        int error;                      /* Return code - valid upon done */
        unsigned int act_len;           /* Return size */
        unsigned char key, asc, ascq;   /* May be valid if error==-EIO */

        int stat_count;                 /* Retries getting status. */
        unsigned int timeo;             /* jiffies until rq->timeout changes */

        unsigned int len;               /* Requested length */
        unsigned int current_sg;
        unsigned int nsg;               /* sgv[nsg] */
        struct scatterlist sgv[UB_MAX_REQ_SG];

        struct ub_lun *lun;
        void (*done)(struct ub_dev *, struct ub_scsi_cmd *);
        void *back;
};
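
/*
 * Per-request state, kept around so that a failed command can be rebuilt
 * and retried (see ub_rw_cmd_retry).
 */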
struct ub_request {
        struct request *rq;
        unsigned int current_try;
        unsigned int nsg;               /* sgv[nsg] */
        struct scatterlist sgv[UB_MAX_REQ_SG];
};

/*
 * The capacity of the device, as learned from READ CAPACITY.
 */
struct ub_capacity {
        unsigned long nsec;             /* Linux size - 512 byte sectors */
        unsigned int bsize;             /* Linux hardsect_size */
        unsigned int bshift;            /* Shift between 512 and hard sects */
};

/*
 * This is a direct take-off from linux/include/completion.h
 * The difference is that I do not wait on this thing, just poll.
 * When I want to wait (ub_probe), I just use the stock completion.
 *
 * Note that INIT_COMPLETION takes no lock. It is correct. But why
 * in the bloody hell that thing takes struct instead of pointer to struct
 * is quite beyond me. I just copied it from the stock completion.
 */
struct ub_completion {
        unsigned int done;
        spinlock_t lock;
};

static DEFINE_MUTEX(ub_mutex);
static inline void ub_init_completion(struct ub_completion *x)
{
        x->done = 0;
        spin_lock_init(&x->lock);
}

#define UB_INIT_COMPLETION(x)   ((x).done = 0)

static void ub_complete(struct ub_completion *x)
{
        unsigned long flags;

        spin_lock_irqsave(&x->lock, flags);
        x->done++;
        spin_unlock_irqrestore(&x->lock, flags);
}

static int ub_is_completed(struct ub_completion *x)
{
        unsigned long flags;
        int ret;

        spin_lock_irqsave(&x->lock, flags);
        ret = x->done;
        spin_unlock_irqrestore(&x->lock, flags);
        return ret;
}

/*
 * The command queue: filled by the request function, drained by the tasklet.
 */
struct ub_scsi_cmd_queue {
        int qlen, qmax;
        struct ub_scsi_cmd *head, *tail;
};

/*
 * The block device instance (one per LUN).
 */
struct ub_lun {
        struct ub_dev *udev;
        struct list_head link;
        struct gendisk *disk;
        int id;                         /* Host index */
        int num;                        /* LUN number */
        char name[16];

        int changed;                    /* Media was changed */
        int removable;
        int readonly;

        struct ub_request urq;

        /* Use Ingo's mempool if or when we have more than one command. */
        /*
         * Currently we never need more than one command for the whole device.
         * However, giving every LUN a command is a cheap and automatic way
         * to enforce fairness between them.
         */
        int cmda[1];
        struct ub_scsi_cmd cmdv[1];

        struct ub_capacity capacity;
};

/*
 * The USB device instance.
 */
struct ub_dev {
        spinlock_t *lock;
        atomic_t poison;                /* The USB device is disconnected */
        int openc;                      /* protected by ub_lock! */
                                        /* kref is too implicit for our taste */
        int reset;                      /* Reset is running */
        int bad_resid;
        unsigned int tagcnt;
        char name[12];
        struct usb_device *dev;
        struct usb_interface *intf;

        struct list_head luns;

        unsigned int send_bulk_pipe;    /* cached pipe values */
        unsigned int recv_bulk_pipe;
        unsigned int send_ctrl_pipe;
        unsigned int recv_ctrl_pipe;

        struct tasklet_struct tasklet;

        struct ub_scsi_cmd_queue cmd_queue;
        struct ub_scsi_cmd top_rqs_cmd; /* REQUEST SENSE */
        unsigned char top_sense[UB_SENSE_SIZE];

        struct ub_completion work_done;
        struct urb work_urb;
        struct timer_list work_timer;
        int last_pipe;                  /* What might need clearing */
        __le32 signature;               /* Learned signature */
        struct bulk_cb_wrap work_bcb;
        struct bulk_cs_wrap work_bcs;
        struct usb_ctrlrequest work_cr;

        struct work_struct reset_work;
        wait_queue_head_t reset_wait;
};

/*
 * Forward declarations.
 */
static void ub_cleanup(struct ub_dev *sc);
static int ub_request_fn_1(struct ub_lun *lun, struct request *rq);
static void ub_cmd_build_block(struct ub_dev *sc, struct ub_lun *lun,
    struct ub_scsi_cmd *cmd, struct ub_request *urq);
static void ub_cmd_build_packet(struct ub_dev *sc, struct ub_lun *lun,
    struct ub_scsi_cmd *cmd, struct ub_request *urq);
static void ub_rw_cmd_done(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_end_rq(struct request *rq, unsigned int status);
static int ub_rw_cmd_retry(struct ub_dev *sc, struct ub_lun *lun,
    struct ub_request *urq, struct ub_scsi_cmd *cmd);
static int ub_submit_scsi(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_urb_complete(struct urb *urb);
static void ub_scsi_action(unsigned long _dev);
static void ub_scsi_dispatch(struct ub_dev *sc);
static void ub_scsi_urb_compl(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_data_start(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_state_done(struct ub_dev *sc, struct ub_scsi_cmd *cmd, int rc);
static int __ub_state_stat(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_state_stat(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_state_stat_counted(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static void ub_state_sense(struct ub_dev *sc, struct ub_scsi_cmd *cmd);
static int ub_submit_clear_stall(struct ub_dev *sc, struct ub_scsi_cmd *cmd,
    int stalled_pipe);
static void ub_top_sense_done(struct ub_dev *sc, struct ub_scsi_cmd *scmd);
static void ub_reset_enter(struct ub_dev *sc, int try);
static void ub_reset_task(struct work_struct *work);
static int ub_sync_tur(struct ub_dev *sc, struct ub_lun *lun);
static int ub_sync_read_cap(struct ub_dev *sc, struct ub_lun *lun,
    struct ub_capacity *ret);
static int ub_sync_reset(struct ub_dev *sc);
static int ub_probe_clear_stall(struct ub_dev *sc, int stalled_pipe);
static int ub_probe_lun(struct ub_dev *sc, int lnum);

/*
 * The USB device ID table. Under CONFIG_USB_LIBUSUAL the table is
 * borrowed from usb-storage.
 */
#ifdef CONFIG_USB_LIBUSUAL

#define ub_usb_ids usb_storage_usb_ids

#else

static const struct usb_device_id ub_usb_ids[] = {
        { USB_INTERFACE_INFO(USB_CLASS_MASS_STORAGE, USB_SC_SCSI, USB_PR_BULK) },
        { }
};

MODULE_DEVICE_TABLE(usb, ub_usb_ids);
#endif /* CONFIG_USB_LIBUSUAL */

/*
 * Find me a way to identify "next free minor" for add_disk(),
 * and the array disappears the next day. However, the number of
 * hosts has something to do with the naming and /proc/partitions.
 * This has to be thought out in detail before changing.
 * If UB_MAX_HOST was 1000, we'd use a bitmap. Or a better data structure.
 */
#define UB_MAX_HOSTS  26
static char ub_hostv[UB_MAX_HOSTS];

#define UB_QLOCK_NUM 5
static spinlock_t ub_qlockv[UB_QLOCK_NUM];
static int ub_qlock_next = 0;

static DEFINE_SPINLOCK(ub_lock);        /* Locks globals and ->openc */

/*
 * The id allocator.
 *
 * This also stores the host for indexing by minor, which is somewhat dirty.
 */
static int ub_id_get(void)
{
        unsigned long flags;
        int i;

        spin_lock_irqsave(&ub_lock, flags);
        for (i = 0; i < UB_MAX_HOSTS; i++) {
                if (ub_hostv[i] == 0) {
                        ub_hostv[i] = 1;
                        spin_unlock_irqrestore(&ub_lock, flags);
                        return i;
                }
        }
        spin_unlock_irqrestore(&ub_lock, flags);
        return -1;
}

static void ub_id_put(int id)
{
        unsigned long flags;

        if (id < 0 || id >= UB_MAX_HOSTS) {
                printk(KERN_ERR DRV_NAME ": bad host ID %d\n", id);
                return;
        }

        spin_lock_irqsave(&ub_lock, flags);
        if (ub_hostv[id] == 0) {
                spin_unlock_irqrestore(&ub_lock, flags);
                printk(KERN_ERR DRV_NAME ": freeing free host ID %d\n", id);
                return;
        }
        ub_hostv[id] = 0;
        spin_unlock_irqrestore(&ub_lock, flags);
}

/*
 * This is necessitated by the fact that blk_cleanup_queue does not
 * necessarily destroy the queue. Instead, it may merely decrease q->refcnt.
 * Since our blk_init_queue() passes a spinlock common with ub_dev,
 * we have life time issues when ub_cleanup frees ub_dev.
 */
static spinlock_t *ub_next_lock(void)
{
        unsigned long flags;
        spinlock_t *ret;

        spin_lock_irqsave(&ub_lock, flags);
        ret = &ub_qlockv[ub_qlock_next];
        ub_qlock_next = (ub_qlock_next + 1) % UB_QLOCK_NUM;
        spin_unlock_irqrestore(&ub_lock, flags);
        return ret;
}

/*
 * Downcount for deallocation. This rides on two assumptions:
 *  - once something is poisoned, its refcount cannot grow
 *  - opens cannot happen at this time (del_gendisk was done)
 * If the above is true, we can drop the lock, which we need for
 * blk_cleanup_queue(): the silly thing may attempt to sleep.
 * [Actually, it never needs to sleep for us, but it calls might_sleep()]
 */
static void ub_put(struct ub_dev *sc)
{
        unsigned long flags;

        spin_lock_irqsave(&ub_lock, flags);
        --sc->openc;
        if (sc->openc == 0 && atomic_read(&sc->poison)) {
                spin_unlock_irqrestore(&ub_lock, flags);
                ub_cleanup(sc);
        } else {
                spin_unlock_irqrestore(&ub_lock, flags);
        }
}

/*
 * Final cleanup and deallocation.
 */
static void ub_cleanup(struct ub_dev *sc)
{
        struct list_head *p;
        struct ub_lun *lun;
        struct request_queue *q;

        while (!list_empty(&sc->luns)) {
                p = sc->luns.next;
                lun = list_entry(p, struct ub_lun, link);
                list_del(p);

                /* I don't think queue can be NULL. But... Stolen from sx8.c */
                if ((q = lun->disk->queue) != NULL)
                        blk_cleanup_queue(q);
                /*
                 * If we zero disk->private_data BEFORE put_disk, we have
                 * to check for NULL all over the place in open, release,
                 * check_media and revalidate, because the block level
                 * semaphore is well inside the put_disk.
                 * But we cannot zero after the call, because *disk is gone.
                 * The sd.c is blatantly racy in this area.
                 */
                /* disk->private_data = NULL; */
                put_disk(lun->disk);
                lun->disk = NULL;

                ub_id_put(lun->id);
                kfree(lun);
        }

        usb_set_intfdata(sc->intf, NULL);
        usb_put_intf(sc->intf);
        usb_put_dev(sc->dev);
        kfree(sc);
}

/*
 * The "command allocator".
 */
static struct ub_scsi_cmd *ub_get_cmd(struct ub_lun *lun)
{
        struct ub_scsi_cmd *ret;

        if (lun->cmda[0])
                return NULL;
        ret = &lun->cmdv[0];
        lun->cmda[0] = 1;
        return ret;
}

static void ub_put_cmd(struct ub_lun *lun, struct ub_scsi_cmd *cmd)
{
        if (cmd != &lun->cmdv[0]) {
                printk(KERN_WARNING "%s: releasing a foreign cmd %p\n",
                    lun->name, cmd);
                return;
        }
        if (!lun->cmda[0]) {
                printk(KERN_WARNING "%s: releasing a free cmd\n", lun->name);
                return;
        }
        lun->cmda[0] = 0;
}

/*
 * The command queue.
 */
static void ub_cmdq_add(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
{
        struct ub_scsi_cmd_queue *t = &sc->cmd_queue;

        if (t->qlen++ == 0) {
                t->head = cmd;
                t->tail = cmd;
        } else {
                t->tail->next = cmd;
                t->tail = cmd;
        }

        if (t->qlen > t->qmax)
                t->qmax = t->qlen;
}
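
/*
 * Like ub_cmdq_add, but the command goes to the head of the queue,
 * ahead of anything already queued.
 */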
static void ub_cmdq_insert(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
{
        struct ub_scsi_cmd_queue *t = &sc->cmd_queue;

        if (t->qlen++ == 0) {
                t->head = cmd;
                t->tail = cmd;
        } else {
                cmd->next = t->head;
                t->head = cmd;
        }

        if (t->qlen > t->qmax)
                t->qmax = t->qlen;
}

static struct ub_scsi_cmd *ub_cmdq_pop(struct ub_dev *sc)
{
        struct ub_scsi_cmd_queue *t = &sc->cmd_queue;
        struct ub_scsi_cmd *cmd;

        if (t->qlen == 0)
                return NULL;
        if (--t->qlen == 0)
                t->tail = NULL;
        cmd = t->head;
        t->head = cmd->next;
        cmd->next = NULL;
        return cmd;
}

#define ub_cmdq_peek(sc)  ((sc)->cmd_queue.head)

/*
 * The request function is our main entry point.
 */
static void ub_request_fn(struct request_queue *q)
{
        struct ub_lun *lun = q->queuedata;
        struct request *rq;

        while ((rq = blk_peek_request(q)) != NULL) {
                if (ub_request_fn_1(lun, rq) != 0) {
                        blk_stop_queue(q);
                        break;
                }
        }
}
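
/*
 * Process one request. A nonzero return tells the caller to stop the
 * queue: either a command is still in flight or no command could be
 * allocated.
 */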
static int ub_request_fn_1(struct ub_lun *lun, struct request *rq)
{
        struct ub_dev *sc = lun->udev;
        struct ub_scsi_cmd *cmd;
        struct ub_request *urq;
        int n_elem;

        if (atomic_read(&sc->poison)) {
                blk_start_request(rq);
                ub_end_rq(rq, DID_NO_CONNECT << 16);
                return 0;
        }

        if (lun->changed && rq->cmd_type != REQ_TYPE_BLOCK_PC) {
                blk_start_request(rq);
                ub_end_rq(rq, SAM_STAT_CHECK_CONDITION);
                return 0;
        }

        if (lun->urq.rq != NULL)
                return -1;
        if ((cmd = ub_get_cmd(lun)) == NULL)
                return -1;
        memset(cmd, 0, sizeof(struct ub_scsi_cmd));

        blk_start_request(rq);

        urq = &lun->urq;
        memset(urq, 0, sizeof(struct ub_request));
        urq->rq = rq;

        /*
         * get scatterlist from block layer
         */
        sg_init_table(&urq->sgv[0], UB_MAX_REQ_SG);
        n_elem = blk_rq_map_sg(lun->disk->queue, rq, &urq->sgv[0]);
        if (n_elem < 0) {
                /* Impossible, because blk_rq_map_sg should not hit ENOMEM. */
                printk(KERN_INFO "%s: failed request map (%d)\n",
                    lun->name, n_elem);
                goto drop;
        }
        if (n_elem > UB_MAX_REQ_SG) {   /* Paranoia */
                printk(KERN_WARNING "%s: request with %d segments\n",
                    lun->name, n_elem);
                goto drop;
        }
        urq->nsg = n_elem;

        if (rq->cmd_type == REQ_TYPE_BLOCK_PC) {
                ub_cmd_build_packet(sc, lun, cmd, urq);
        } else {
                ub_cmd_build_block(sc, lun, cmd, urq);
        }
        cmd->state = UB_CMDST_INIT;
        cmd->lun = lun;
        cmd->done = ub_rw_cmd_done;
        cmd->back = urq;

        cmd->tag = sc->tagcnt++;
        if (ub_submit_scsi(sc, cmd) != 0)
                goto drop;

        return 0;

drop:
        ub_put_cmd(lun, cmd);
        ub_end_rq(rq, DID_ERROR << 16);
        return 0;
}
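
/*
 * Build a READ(10)/WRITE(10) for a filesystem request. The block layer
 * hands out positions in 512-byte units; bshift converts them into
 * device blocks.
 */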
static void ub_cmd_build_block(struct ub_dev *sc, struct ub_lun *lun,
    struct ub_scsi_cmd *cmd, struct ub_request *urq)
{
        struct request *rq = urq->rq;
        unsigned int block, nblks;

        if (rq_data_dir(rq) == WRITE)
                cmd->dir = UB_DIR_WRITE;
        else
                cmd->dir = UB_DIR_READ;

        cmd->nsg = urq->nsg;
        memcpy(cmd->sgv, urq->sgv, sizeof(struct scatterlist) * cmd->nsg);

        /*
         * build the command
         *
         * The call to blk_queue_logical_block_size() guarantees that request
         * is aligned, but it is given in terms of 512 byte units, always.
         */
        block = blk_rq_pos(rq) >> lun->capacity.bshift;
        nblks = blk_rq_sectors(rq) >> lun->capacity.bshift;

        cmd->cdb[0] = (cmd->dir == UB_DIR_READ)? READ_10: WRITE_10;
        /* 10-byte uses 4 bytes of LBA: 2147483648KB, 2097152MB, 2048GB */
        cmd->cdb[2] = block >> 24;
        cmd->cdb[3] = block >> 16;
        cmd->cdb[4] = block >> 8;
        cmd->cdb[5] = block;
        cmd->cdb[7] = nblks >> 8;
        cmd->cdb[8] = nblks;
        cmd->cdb_len = 10;

        cmd->len = blk_rq_bytes(rq);
}
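
/*
 * Build a command for a BLOCK_PC (SCSI pass-through) request: the CDB
 * and the timeout are taken verbatim from the request.
 */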
static void ub_cmd_build_packet(struct ub_dev *sc, struct ub_lun *lun,
    struct ub_scsi_cmd *cmd, struct ub_request *urq)
{
        struct request *rq = urq->rq;

        if (blk_rq_bytes(rq) == 0) {
                cmd->dir = UB_DIR_NONE;
        } else {
                if (rq_data_dir(rq) == WRITE)
                        cmd->dir = UB_DIR_WRITE;
                else
                        cmd->dir = UB_DIR_READ;
        }

        cmd->nsg = urq->nsg;
        memcpy(cmd->sgv, urq->sgv, sizeof(struct scatterlist) * cmd->nsg);

        memcpy(&cmd->cdb, rq->cmd, rq->cmd_len);
        cmd->cdb_len = rq->cmd_len;

        cmd->len = blk_rq_bytes(rq);

        /*
         * To reapply this to every URB is not as incorrect as it looks.
         * In return, we avoid any complicated tracking calculations.
         */
        cmd->timeo = rq->timeout;
}
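
/*
 * The done callback for block requests: map the command's outcome onto
 * the request and restart the block queue.
 */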
static void ub_rw_cmd_done(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
{
        struct ub_lun *lun = cmd->lun;
        struct ub_request *urq = cmd->back;
        struct request *rq;
        unsigned int scsi_status;

        rq = urq->rq;

        if (cmd->error == 0) {
                if (rq->cmd_type == REQ_TYPE_BLOCK_PC) {
                        if (cmd->act_len >= rq->resid_len)
                                rq->resid_len = 0;
                        else
                                rq->resid_len -= cmd->act_len;
                        scsi_status = 0;
                } else {
                        if (cmd->act_len != cmd->len) {
                                scsi_status = SAM_STAT_CHECK_CONDITION;
                        } else {
                                scsi_status = 0;
                        }
                }
        } else {
                if (rq->cmd_type == REQ_TYPE_BLOCK_PC) {
                        /* UB_SENSE_SIZE is smaller than SCSI_SENSE_BUFFERSIZE */
                        memcpy(rq->sense, sc->top_sense, UB_SENSE_SIZE);
                        rq->sense_len = UB_SENSE_SIZE;
                        if (sc->top_sense[0] != 0)
                                scsi_status = SAM_STAT_CHECK_CONDITION;
                        else
                                scsi_status = DID_ERROR << 16;
                } else {
                        if (cmd->error == -EIO &&
                            (cmd->key == 0 ||
                             cmd->key == MEDIUM_ERROR ||
                             cmd->key == UNIT_ATTENTION)) {
                                if (ub_rw_cmd_retry(sc, lun, urq, cmd) == 0)
                                        return;
                        }
                        scsi_status = SAM_STAT_CHECK_CONDITION;
                }
        }

        urq->rq = NULL;

        ub_put_cmd(lun, cmd);
        ub_end_rq(rq, scsi_status);
        blk_start_queue(lun->disk->queue);
}
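
/*
 * Complete the request towards the block layer. A nonzero SCSI-style
 * status is preserved in rq->errors for the upper layers.
 */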
static void ub_end_rq(struct request *rq, unsigned int scsi_status)
{
        int error;

        if (scsi_status == 0) {
                error = 0;
        } else {
                error = -EIO;
                rq->errors = scsi_status;
        }
        __blk_end_request_all(rq, error);
}
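
/*
 * Retry a failed READ/WRITE: enter a reset, rebuild the command from
 * the saved ub_request, and re-queue it. Gives up after three tries.
 */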
static int ub_rw_cmd_retry(struct ub_dev *sc, struct ub_lun *lun,
    struct ub_request *urq, struct ub_scsi_cmd *cmd)
{

        if (atomic_read(&sc->poison))
                return -ENXIO;

        ub_reset_enter(sc, urq->current_try);

        if (urq->current_try >= 3)
                return -EIO;
        urq->current_try++;

        /* Remove this if anyone complains of flooding. */
        printk(KERN_DEBUG "%s: dir %c len/act %d/%d "
            "[sense %x %02x %02x] retry %d\n",
            sc->name, UB_DIR_CHAR(cmd->dir), cmd->len, cmd->act_len,
            cmd->key, cmd->asc, cmd->ascq, urq->current_try);

        memset(cmd, 0, sizeof(struct ub_scsi_cmd));
        ub_cmd_build_block(sc, lun, cmd, urq);

        cmd->state = UB_CMDST_INIT;
        cmd->lun = lun;
        cmd->done = ub_rw_cmd_done;
        cmd->back = urq;

        cmd->tag = sc->tagcnt++;

#if 0 /* Wasteful */
        return ub_submit_scsi(sc, cmd);
#else
        ub_cmdq_add(sc, cmd);
        return 0;
#endif
}

/*
 * Submit a regular SCSI operation (not an auto-sense).
 *
 * The Iron Law of Good Submit Routine is:
 * Zero return - callback is done, Nonzero return - callback is not done.
 * No exceptions.
 *
 * Host is assumed locked.
 */
static int ub_submit_scsi(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
{

        if (cmd->state != UB_CMDST_INIT ||
            (cmd->dir != UB_DIR_NONE && cmd->len == 0)) {
                return -EINVAL;
        }

        ub_cmdq_add(sc, cmd);
        /*
         * We can call ub_scsi_dispatch(sc) right away here, but it's a little
         * safer to jump to a tasklet, in case upper layers do something silly.
         */
        tasklet_schedule(&sc->tasklet);
        return 0;
}

/*
 * Submit the first URB for the queued command.
 * This function does not deal with queueing in any way.
 */
static int ub_scsi_cmd_start(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
{
        struct bulk_cb_wrap *bcb;
        int rc;

        bcb = &sc->work_bcb;

        /*
         * ``If the allocation length is eighteen or greater, and a device
         * server returns less than eighteen bytes of data, the application
         * client should assume that the bytes not transferred would have been
         * zeroes had the device server returned those bytes.''
         *
         * We zero sense for all commands so that when a packet request
         * fails it does not return a stale sense.
         */
        memset(&sc->top_sense, 0, UB_SENSE_SIZE);

        /* set up the command wrapper */
        bcb->Signature = cpu_to_le32(US_BULK_CB_SIGN);
        bcb->Tag = cmd->tag;            /* Endianness is not important */
        bcb->DataTransferLength = cpu_to_le32(cmd->len);
        bcb->Flags = (cmd->dir == UB_DIR_READ) ? 0x80 : 0;
        bcb->Lun = (cmd->lun != NULL) ? cmd->lun->num : 0;
        bcb->Length = cmd->cdb_len;

        /* copy the command payload */
        memcpy(bcb->CDB, cmd->cdb, UB_MAX_CDB_SIZE);

        UB_INIT_COMPLETION(sc->work_done);

        sc->last_pipe = sc->send_bulk_pipe;
        usb_fill_bulk_urb(&sc->work_urb, sc->dev, sc->send_bulk_pipe,
            bcb, US_BULK_CB_WRAP_LEN, ub_urb_complete, sc);

        if ((rc = usb_submit_urb(&sc->work_urb, GFP_ATOMIC)) != 0) {
                /* XXX Clear stalls */
                ub_complete(&sc->work_done);
                return rc;
        }

        sc->work_timer.expires = jiffies + UB_URB_TIMEOUT;
        add_timer(&sc->work_timer);

        cmd->state = UB_CMDST_CMD;
        return 0;
}

/*
 * Timeout handler.
 */
static void ub_urb_timeout(unsigned long arg)
{
        struct ub_dev *sc = (struct ub_dev *) arg;
        unsigned long flags;

        spin_lock_irqsave(sc->lock, flags);
        if (!ub_is_completed(&sc->work_done))
                usb_unlink_urb(&sc->work_urb);
        spin_unlock_irqrestore(sc->lock, flags);
}

/*
 * Completion routine for the work URB.
 *
 * This can be called directly from usb_submit_urb (while we have
 * the sc->lock taken) and from an interrupt (while we do NOT have
 * the sc->lock taken). Therefore, bounce this off to a tasklet.
 */
static void ub_urb_complete(struct urb *urb)
{
        struct ub_dev *sc = urb->context;

        ub_complete(&sc->work_done);
        tasklet_schedule(&sc->tasklet);
}
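
/*
 * The tasklet body. Everything that touches the command queue from
 * here on runs under sc->lock.
 */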
static void ub_scsi_action(unsigned long _dev)
{
        struct ub_dev *sc = (struct ub_dev *) _dev;
        unsigned long flags;

        spin_lock_irqsave(sc->lock, flags);
        ub_scsi_dispatch(sc);
        spin_unlock_irqrestore(sc->lock, flags);
}

static void ub_scsi_dispatch(struct ub_dev *sc)
{
        struct ub_scsi_cmd *cmd;
        int rc;

        while (!sc->reset && (cmd = ub_cmdq_peek(sc)) != NULL) {
                if (cmd->state == UB_CMDST_DONE) {
                        ub_cmdq_pop(sc);
                        (*cmd->done)(sc, cmd);
                } else if (cmd->state == UB_CMDST_INIT) {
                        if ((rc = ub_scsi_cmd_start(sc, cmd)) == 0)
                                break;
                        cmd->error = rc;
                        cmd->state = UB_CMDST_DONE;
                } else {
                        if (!ub_is_completed(&sc->work_done))
                                break;
                        del_timer(&sc->work_timer);
                        ub_scsi_urb_compl(sc, cmd);
                }
        }
}
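
/*
 * The heart of the state machine: advance a command by one state,
 * based on the URB that just completed. See the diagram at the top
 * of this file.
 */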
static void ub_scsi_urb_compl(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
{
        struct urb *urb = &sc->work_urb;
        struct bulk_cs_wrap *bcs;
        int endp;
        int len;
        int rc;

        if (atomic_read(&sc->poison)) {
                ub_state_done(sc, cmd, -ENODEV);
                return;
        }

        endp = usb_pipeendpoint(sc->last_pipe);
        if (usb_pipein(sc->last_pipe))
                endp |= USB_DIR_IN;

        if (cmd->state == UB_CMDST_CLEAR) {
                if (urb->status == -EPIPE) {
                        /*
                         * A STALL while clearing a STALL.
                         * The control pipe clears itself - nothing to do.
                         */
                        printk(KERN_NOTICE "%s: stall on control pipe\n",
                            sc->name);
                        goto Bad_End;
                }

                /*
                 * We ignore the result for the halt clear.
                 */

                usb_reset_endpoint(sc->dev, endp);

                ub_state_sense(sc, cmd);

        } else if (cmd->state == UB_CMDST_CLR2STS) {
                if (urb->status == -EPIPE) {
                        printk(KERN_NOTICE "%s: stall on control pipe\n",
                            sc->name);
                        goto Bad_End;
                }

                /*
                 * We ignore the result for the halt clear.
                 */

                usb_reset_endpoint(sc->dev, endp);

                ub_state_stat(sc, cmd);

        } else if (cmd->state == UB_CMDST_CLRRS) {
                if (urb->status == -EPIPE) {
                        printk(KERN_NOTICE "%s: stall on control pipe\n",
                            sc->name);
                        goto Bad_End;
                }

                /*
                 * We ignore the result for the halt clear.
                 */

                usb_reset_endpoint(sc->dev, endp);

                ub_state_stat_counted(sc, cmd);

        } else if (cmd->state == UB_CMDST_CMD) {
                switch (urb->status) {
                case 0:
                        break;
                case -EOVERFLOW:
                        goto Bad_End;
                case -EPIPE:
                        rc = ub_submit_clear_stall(sc, cmd, sc->last_pipe);
                        if (rc != 0) {
                                printk(KERN_NOTICE "%s: "
                                    "unable to submit clear (%d)\n",
                                    sc->name, rc);
                                /*
                                 * This is typically ENOMEM or some other such shit.
                                 * Retrying is pointless. Just do Bad End on it...
                                 */
                                ub_state_done(sc, cmd, rc);
                                return;
                        }
                        cmd->state = UB_CMDST_CLEAR;
                        return;
                case -ESHUTDOWN:        /* unplug */
                case -EILSEQ:           /* unplug timeout on uhci */
                        ub_state_done(sc, cmd, -ENODEV);
                        return;
                default:
                        goto Bad_End;
                }
                if (urb->actual_length != US_BULK_CB_WRAP_LEN) {
                        goto Bad_End;
                }

                if (cmd->dir == UB_DIR_NONE || cmd->nsg < 1) {
                        ub_state_stat(sc, cmd);
                        return;
                }

                // udelay(125);         // usb-storage has this
                ub_data_start(sc, cmd);

        } else if (cmd->state == UB_CMDST_DATA) {
                if (urb->status == -EPIPE) {
                        rc = ub_submit_clear_stall(sc, cmd, sc->last_pipe);
                        if (rc != 0) {
                                printk(KERN_NOTICE "%s: "
                                    "unable to submit clear (%d)\n",
                                    sc->name, rc);
                                ub_state_done(sc, cmd, rc);
                                return;
                        }
                        cmd->state = UB_CMDST_CLR2STS;
                        return;
                }
                if (urb->status == -EOVERFLOW) {
                        /*
                         * A babble? Failure, but we must transfer CSW now.
                         */
                        cmd->error = -EOVERFLOW;        /* A cheap trick... */
                        ub_state_stat(sc, cmd);
                        return;
                }

                if (cmd->dir == UB_DIR_WRITE) {
                        /*
                         * Do not continue writes in case of a failure.
                         * Doing so would cause sectors to be mixed up,
                         * which is worse than sectors lost.
                         *
                         * We must try to read the CSW, or many devices
                         * get confused.
                         */
                        len = urb->actual_length;
                        if (urb->status != 0 ||
                            len != cmd->sgv[cmd->current_sg].length) {
                                cmd->act_len += len;

                                cmd->error = -EIO;
                                ub_state_stat(sc, cmd);
                                return;
                        }

                } else {
/*
|
|
|
|
* If an error occurs on read, we record it, and
|
|
|
|
* continue to fetch data in order to avoid a bubble.
|
|
|
|
*
|
|
|
|
* As a small shortcut, we stop if we detect that
|
|
|
|
* a CSW is mixed into the data.
|
|
|
|
*/
|
|
|
|
if (urb->status != 0)
|
|
|
|
cmd->error = -EIO;
|
|
|
|
|
|
|
|
len = urb->actual_length;
|
|
|
|
if (urb->status != 0 ||
|
|
|
|
len != cmd->sgv[cmd->current_sg].length) {
|
|
|
|
if ((len & 0x1FF) == US_BULK_CS_WRAP_LEN)
|
|
|
|
goto Bad_End;
|
|
|
|
}
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-08-15 04:16:03 +00:00
|
|
|
cmd->act_len += urb->actual_length;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-08-15 04:16:03 +00:00
|
|
|
if (++cmd->current_sg < cmd->nsg) {
|
|
|
|
ub_data_start(sc, cmd);
|
|
|
|
return;
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
ub_state_stat(sc, cmd);
|
|
|
|
|
|
|
|
} else if (cmd->state == UB_CMDST_STAT) {
|
|
|
|
if (urb->status == -EPIPE) {
|
|
|
|
rc = ub_submit_clear_stall(sc, cmd, sc->last_pipe);
|
|
|
|
if (rc != 0) {
|
|
|
|
printk(KERN_NOTICE "%s: "
|
2005-05-01 23:05:40 +00:00
|
|
|
"unable to submit clear (%d)\n",
|
|
|
|
sc->name, rc);
|
2005-12-17 10:16:43 +00:00
|
|
|
ub_state_done(sc, cmd, rc);
|
|
|
|
return;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
2005-07-27 18:43:51 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Having a stall when getting CSW is an error, so
|
|
|
|
* make sure upper levels are not oblivious to it.
|
|
|
|
*/
|
|
|
|
cmd->error = -EIO; /* A cheap trick... */
|
|
|
|
|
|
|
|
cmd->state = UB_CMDST_CLRRS;
|
2005-04-16 22:20:36 +00:00
|
|
|
return;
|
|
|
|
}
|
2005-12-17 10:16:43 +00:00
|
|
|
|
|
|
|
/* Catch everything, including -EOVERFLOW and other nasties. */
|
2005-04-16 22:20:36 +00:00
|
|
|
if (urb->status != 0)
|
|
|
|
goto Bad_End;
|
|
|
|
|
|
|
|
if (urb->actual_length == 0) {
|
2005-07-27 18:43:51 +00:00
|
|
|
ub_state_stat_counted(sc, cmd);
|
2005-04-16 22:20:36 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check the returned Bulk protocol status.
|
2005-07-27 18:43:51 +00:00
|
|
|
* The status block has to be validated first.
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
|
|
|
|
|
|
|
bcs = &sc->work_bcs;
|
2005-07-27 18:43:51 +00:00
|
|
|
|
|
|
|
if (sc->signature == cpu_to_le32(0)) {
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
|
2005-07-27 18:43:51 +00:00
|
|
|
* This is the first reply, so do not perform the check.
|
|
|
|
* Instead, remember the signature the device uses
|
|
|
|
* for future checks. But do not allow a nul.
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
2005-07-27 18:43:51 +00:00
|
|
|
sc->signature = bcs->Signature;
|
|
|
|
if (sc->signature == cpu_to_le32(0)) {
|
|
|
|
ub_state_stat_counted(sc, cmd);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
if (bcs->Signature != sc->signature) {
|
|
|
|
ub_state_stat_counted(sc, cmd);
|
|
|
|
return;
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
if (bcs->Tag != cmd->tag) {
|
|
|
|
/*
|
|
|
|
* This usually happens when we disagree with the
|
|
|
|
* device's microcode about something. For instance,
|
|
|
|
* a few of them throw this after timeouts. They buffer
|
|
|
|
* commands and reply to commands that we timed out earlier.
|
|
|
|
* Without flushing these replies we loop forever.
|
|
|
|
*/
|
2005-07-27 18:43:51 +00:00
|
|
|
ub_state_stat_counted(sc, cmd);
|
2005-04-16 22:20:36 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2008-04-19 21:42:49 +00:00
|
|
|
if (!sc->bad_resid) {
|
|
|
|
len = le32_to_cpu(bcs->Residue);
|
|
|
|
if (len != cmd->len - cmd->act_len) {
|
|
|
|
/*
|
|
|
|
* Only start ignoring if this cmd ended well.
|
|
|
|
*/
|
|
|
|
if (cmd->len == cmd->act_len) {
|
|
|
|
printk(KERN_NOTICE "%s: "
|
|
|
|
"bad residual %d of %d, ignoring\n",
|
|
|
|
sc->name, len, cmd->len);
|
|
|
|
sc->bad_resid = 1;
|
|
|
|
}
|
|
|
|
}
|
2005-07-27 18:43:51 +00:00
|
|
|
}
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
switch (bcs->Status) {
|
|
|
|
case US_BULK_STAT_OK:
|
|
|
|
break;
|
|
|
|
case US_BULK_STAT_FAIL:
|
|
|
|
ub_state_sense(sc, cmd);
|
|
|
|
return;
|
|
|
|
case US_BULK_STAT_PHASE:
|
|
|
|
goto Bad_End;
|
|
|
|
default:
|
|
|
|
printk(KERN_INFO "%s: unknown CSW status 0x%x\n",
|
|
|
|
sc->name, bcs->Status);
|
2005-12-17 10:16:43 +00:00
|
|
|
ub_state_done(sc, cmd, -EINVAL);
|
|
|
|
return;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Not zeroing error to preserve a babble indicator */
|
2005-07-27 18:43:51 +00:00
|
|
|
if (cmd->error != 0) {
|
|
|
|
ub_state_sense(sc, cmd);
|
|
|
|
return;
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
cmd->state = UB_CMDST_DONE;
|
|
|
|
ub_cmdq_pop(sc);
|
|
|
|
(*cmd->done)(sc, cmd);
|
|
|
|
|
|
|
|
} else if (cmd->state == UB_CMDST_SENSE) {
|
|
|
|
ub_state_done(sc, cmd, -EIO);
|
|
|
|
|
|
|
|
} else {
|
2008-04-19 21:45:24 +00:00
|
|
|
printk(KERN_WARNING "%s: wrong command state %d\n",
|
2005-05-01 23:05:40 +00:00
|
|
|
sc->name, cmd->state);
|
2005-12-17 10:16:43 +00:00
|
|
|
ub_state_done(sc, cmd, -EINVAL);
|
|
|
|
return;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
return;
|
|
|
|
|
|
|
|
Bad_End: /* Little Excel is dead */
|
|
|
|
ub_state_done(sc, cmd, -EIO);
|
|
|
|
}
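
/*
 * Editorial aside on the CSW handling above, kept as a sketch.
 * The Bulk-Only Command Status Wrapper is 13 bytes: Signature, Tag,
 * Residue, Status (struct bulk_cs_wrap, <linux/usb_usual.h>). The spec
 * fixes the signature as 'USBS', but enough devices send junk that the
 * code learns the first nonzero value instead. The checks run in the
 * same order as the state machine above: signature, tag, residue,
 * status. The helper below is illustrative only, not driver code.
 */
#if 0
static int ub_csw_plausible(const struct bulk_cs_wrap *bcs,
	__le32 learned_sig, u32 expected_tag)
{
	if (bcs->Signature != learned_sig)
		return 0;	/* corrupted or foreign; re-read the CSW */
	if (bcs->Tag != expected_tag)
		return 0;	/* stale reply to a timed-out command */
	return bcs->Status <= US_BULK_STAT_PHASE;
}
#endif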
|
|
|
|
|
2005-08-15 04:16:03 +00:00
|
|
|
/*
|
|
|
|
* Factorization helper for the command state machine:
|
|
|
|
* Initiate a data segment transfer.
|
|
|
|
*/
|
|
|
|
static void ub_data_start(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
|
|
|
|
{
|
|
|
|
struct scatterlist *sg = &cmd->sgv[cmd->current_sg];
|
|
|
|
int pipe;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
UB_INIT_COMPLETION(sc->work_done);
|
|
|
|
|
|
|
|
if (cmd->dir == UB_DIR_READ)
|
|
|
|
pipe = sc->recv_bulk_pipe;
|
|
|
|
else
|
|
|
|
pipe = sc->send_bulk_pipe;
|
|
|
|
sc->last_pipe = pipe;
|
2007-10-22 19:19:53 +00:00
|
|
|
usb_fill_bulk_urb(&sc->work_urb, sc->dev, pipe, sg_virt(sg),
|
|
|
|
sg->length, ub_urb_complete, sc);
|
2005-08-15 04:16:03 +00:00
|
|
|
|
|
|
|
if ((rc = usb_submit_urb(&sc->work_urb, GFP_ATOMIC)) != 0) {
|
|
|
|
/* XXX Clear stalls */
|
|
|
|
ub_complete(&sc->work_done);
|
|
|
|
ub_state_done(sc, cmd, rc);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2008-04-19 21:32:18 +00:00
|
|
|
if (cmd->timeo)
|
|
|
|
sc->work_timer.expires = jiffies + cmd->timeo;
|
|
|
|
else
|
|
|
|
sc->work_timer.expires = jiffies + UB_DATA_TIMEOUT;
|
2005-08-15 04:16:03 +00:00
|
|
|
add_timer(&sc->work_timer);
|
|
|
|
|
|
|
|
cmd->state = UB_CMDST_DATA;
|
|
|
|
}
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
|
|
|
|
* Factorization helper for the command state machine:
|
|
|
|
* Finish the command.
|
|
|
|
*/
|
|
|
|
static void ub_state_done(struct ub_dev *sc, struct ub_scsi_cmd *cmd, int rc)
|
|
|
|
{
|
|
|
|
|
|
|
|
cmd->error = rc;
|
|
|
|
cmd->state = UB_CMDST_DONE;
|
|
|
|
ub_cmdq_pop(sc);
|
|
|
|
(*cmd->done)(sc, cmd);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Factorization helper for the command state machine:
|
|
|
|
* Submit a CSW read.
|
|
|
|
*/
|
2005-07-27 18:43:51 +00:00
|
|
|
static int __ub_state_stat(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
UB_INIT_COMPLETION(sc->work_done);
|
|
|
|
|
|
|
|
sc->last_pipe = sc->recv_bulk_pipe;
|
|
|
|
usb_fill_bulk_urb(&sc->work_urb, sc->dev, sc->recv_bulk_pipe,
|
|
|
|
&sc->work_bcs, US_BULK_CS_WRAP_LEN, ub_urb_complete, sc);
|
|
|
|
|
|
|
|
if ((rc = usb_submit_urb(&sc->work_urb, GFP_ATOMIC)) != 0) {
|
|
|
|
/* XXX Clear stalls */
|
|
|
|
ub_complete(&sc->work_done);
|
|
|
|
ub_state_done(sc, cmd, rc);
|
2005-07-27 18:43:51 +00:00
|
|
|
return -1;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2008-04-19 21:32:18 +00:00
|
|
|
if (cmd->timeo)
|
|
|
|
sc->work_timer.expires = jiffies + cmd->timeo;
|
|
|
|
else
|
|
|
|
sc->work_timer.expires = jiffies + UB_STAT_TIMEOUT;
|
2005-04-16 22:20:36 +00:00
|
|
|
add_timer(&sc->work_timer);
|
2005-07-27 18:43:51 +00:00
|
|
|
return 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Factorization helper for the command state machine:
|
|
|
|
* Submit a CSW read and go to STAT state.
|
|
|
|
*/
|
|
|
|
static void ub_state_stat(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
|
|
|
|
{
|
2005-07-27 18:43:51 +00:00
|
|
|
|
|
|
|
if (__ub_state_stat(sc, cmd) != 0)
|
|
|
|
return;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
cmd->stat_count = 0;
|
|
|
|
cmd->state = UB_CMDST_STAT;
|
2005-07-27 18:43:51 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Factorization helper for the command state machine:
|
|
|
|
* Submit a CSW read and go to STAT state with counter (along [C] path).
|
|
|
|
*/
|
|
|
|
static void ub_state_stat_counted(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
|
|
|
|
{
|
|
|
|
|
|
|
|
if (++cmd->stat_count >= 4) {
|
|
|
|
ub_state_sense(sc, cmd);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (__ub_state_stat(sc, cmd) != 0)
|
|
|
|
return;
|
|
|
|
|
|
|
|
cmd->state = UB_CMDST_STAT;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Factorization helper for the command state machine:
|
|
|
|
* Submit a REQUEST SENSE and go to SENSE state.
|
|
|
|
*/
|
|
|
|
static void ub_state_sense(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
|
|
|
|
{
|
|
|
|
struct ub_scsi_cmd *scmd;
|
2005-08-15 04:16:03 +00:00
|
|
|
struct scatterlist *sg;
|
2005-04-16 22:20:36 +00:00
|
|
|
int rc;
|
|
|
|
|
|
|
|
if (cmd->cdb[0] == REQUEST_SENSE) {
|
|
|
|
rc = -EPIPE;
|
|
|
|
goto error;
|
|
|
|
}
|
|
|
|
|
|
|
|
scmd = &sc->top_rqs_cmd;
|
2005-08-15 04:16:03 +00:00
|
|
|
memset(scmd, 0, sizeof(struct ub_scsi_cmd));
|
2005-04-16 22:20:36 +00:00
|
|
|
scmd->cdb[0] = REQUEST_SENSE;
|
|
|
|
scmd->cdb[4] = UB_SENSE_SIZE;
|
|
|
|
scmd->cdb_len = 6;
|
|
|
|
scmd->dir = UB_DIR_READ;
|
|
|
|
scmd->state = UB_CMDST_INIT;
|
2005-08-15 04:16:03 +00:00
|
|
|
scmd->nsg = 1;
|
|
|
|
sg = &scmd->sgv[0];
|
2007-10-25 07:17:03 +00:00
|
|
|
sg_init_table(sg, UB_MAX_REQ_SG);
|
2007-10-24 09:20:47 +00:00
|
|
|
sg_set_page(sg, virt_to_page(sc->top_sense), UB_SENSE_SIZE,
|
|
|
|
(unsigned long)sc->top_sense & (PAGE_SIZE-1));
|
2005-04-16 22:20:36 +00:00
|
|
|
scmd->len = UB_SENSE_SIZE;
|
2005-05-01 23:05:40 +00:00
|
|
|
scmd->lun = cmd->lun;
|
2005-04-16 22:20:36 +00:00
|
|
|
scmd->done = ub_top_sense_done;
|
|
|
|
scmd->back = cmd;
|
|
|
|
|
|
|
|
scmd->tag = sc->tagcnt++;
|
|
|
|
|
|
|
|
cmd->state = UB_CMDST_SENSE;
|
|
|
|
|
|
|
|
ub_cmdq_insert(sc, scmd);
|
|
|
|
return;
|
|
|
|
|
|
|
|
error:
|
|
|
|
ub_state_done(sc, cmd, rc);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* A helper for the command's state machine:
|
|
|
|
* Submit a stall clear.
|
|
|
|
*/
|
|
|
|
static int ub_submit_clear_stall(struct ub_dev *sc, struct ub_scsi_cmd *cmd,
|
|
|
|
int stalled_pipe)
|
|
|
|
{
|
|
|
|
int endp;
|
|
|
|
struct usb_ctrlrequest *cr;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
endp = usb_pipeendpoint(stalled_pipe);
|
|
|
|
if (usb_pipein(stalled_pipe))
|
|
|
|
endp |= USB_DIR_IN;
|
|
|
|
|
|
|
|
cr = &sc->work_cr;
|
|
|
|
cr->bRequestType = USB_RECIP_ENDPOINT;
|
|
|
|
cr->bRequest = USB_REQ_CLEAR_FEATURE;
|
|
|
|
cr->wValue = cpu_to_le16(USB_ENDPOINT_HALT);
|
|
|
|
cr->wIndex = cpu_to_le16(endp);
|
|
|
|
cr->wLength = cpu_to_le16(0);
|
|
|
|
|
|
|
|
UB_INIT_COMPLETION(sc->work_done);
|
|
|
|
|
|
|
|
usb_fill_control_urb(&sc->work_urb, sc->dev, sc->send_ctrl_pipe,
|
|
|
|
(unsigned char*) cr, NULL, 0, ub_urb_complete, sc);
|
|
|
|
|
|
|
|
if ((rc = usb_submit_urb(&sc->work_urb, GFP_ATOMIC)) != 0) {
|
|
|
|
ub_complete(&sc->work_done);
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
|
|
|
sc->work_timer.expires = jiffies + UB_CTRL_TIMEOUT;
|
|
|
|
add_timer(&sc->work_timer);
|
|
|
|
return 0;
|
|
|
|
}
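
/*
 * Editorial aside: the asynchronous submission above is forced by the
 * interrupt context of the state machine. In a context that may sleep,
 * the same CLEAR_FEATURE(ENDPOINT_HALT) request could be issued with a
 * single blocking call; a sketch, with the 5000 ms timeout chosen
 * arbitrarily for illustration:
 */
#if 0
	rc = usb_control_msg(sc->dev, sc->send_ctrl_pipe,
		USB_REQ_CLEAR_FEATURE, USB_RECIP_ENDPOINT,
		USB_ENDPOINT_HALT, endp, NULL, 0, 5000 /* ms */);
#endif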
|
|
|
|
|
|
|
|
/*
 * Completion routine for the REQUEST SENSE issued by ub_state_sense.
 */
|
|
|
|
static void ub_top_sense_done(struct ub_dev *sc, struct ub_scsi_cmd *scmd)
|
|
|
|
{
|
2005-08-15 04:16:03 +00:00
|
|
|
unsigned char *sense = sc->top_sense;
|
2005-04-16 22:20:36 +00:00
|
|
|
struct ub_scsi_cmd *cmd;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Find the command which triggered the unit attention or a check,
|
|
|
|
* save the sense into it, and advance its state machine.
|
|
|
|
*/
|
|
|
|
if ((cmd = ub_cmdq_peek(sc)) == NULL) {
|
|
|
|
printk(KERN_WARNING "%s: sense done while idle\n", sc->name);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
if (cmd != scmd->back) {
|
|
|
|
printk(KERN_WARNING "%s: "
|
2005-05-01 23:05:40 +00:00
|
|
|
"sense done for wrong command 0x%x\n",
|
|
|
|
sc->name, cmd->tag);
|
2005-04-16 22:20:36 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
if (cmd->state != UB_CMDST_SENSE) {
|
2008-04-19 21:45:24 +00:00
|
|
|
printk(KERN_WARNING "%s: sense done with bad cmd state %d\n",
|
2005-05-01 23:05:40 +00:00
|
|
|
sc->name, cmd->state);
|
2005-04-16 22:20:36 +00:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2006-03-03 00:42:59 +00:00
|
|
|
/*
|
|
|
|
* Ignoring scmd->act_len, because the buffer was pre-zeroed.
|
|
|
|
*/
|
2005-04-16 22:20:36 +00:00
|
|
|
cmd->key = sense[2] & 0x0F;
|
|
|
|
cmd->asc = sense[12];
|
|
|
|
cmd->ascq = sense[13];
|
|
|
|
|
|
|
|
ub_scsi_urb_compl(sc, cmd);
|
|
|
|
}
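
/*
 * Editorial note on the three bytes picked out above. They come from
 * SPC fixed-format sense data:
 *
 *   byte 2, bits 3..0   sense key
 *   byte 12             additional sense code (ASC)
 *   byte 13             additional sense code qualifier (ASCQ)
 *
 * For example, a card swap in a reader typically yields key 0x6
 * (UNIT ATTENTION) with ASC 0x28, "medium may have changed".
 */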
|
|
|
|
|
2005-12-17 10:16:43 +00:00
|
|
|
/*
|
|
|
|
* Reset management
|
|
|
|
*/
|
|
|
|
|
2006-01-05 08:26:30 +00:00
|
|
|
static void ub_reset_enter(struct ub_dev *sc, int try)
|
2005-12-17 10:16:43 +00:00
|
|
|
{
|
|
|
|
|
|
|
|
if (sc->reset) {
|
|
|
|
/* This happens often on multi-LUN devices. */
|
|
|
|
return;
|
|
|
|
}
|
2006-01-05 08:26:30 +00:00
|
|
|
sc->reset = try + 1;
|
2005-12-17 10:16:43 +00:00
|
|
|
|
|
|
|
#if 0 /* Not needed because the disconnect waits for us. */
|
|
|
|
unsigned long flags;
|
|
|
|
spin_lock_irqsave(&ub_lock, flags);
|
|
|
|
sc->openc++;
|
|
|
|
spin_unlock_irqrestore(&ub_lock, flags);
|
|
|
|
#endif
|
|
|
|
|
|
|
|
#if 0 /* We let them stop themselves. */
|
|
|
|
struct ub_lun *lun;
|
2007-07-09 19:03:07 +00:00
|
|
|
list_for_each_entry(lun, &sc->luns, link) {
|
2005-12-17 10:16:43 +00:00
|
|
|
blk_stop_queue(lun->disk->queue);
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
|
|
|
schedule_work(&sc->reset_work);
|
|
|
|
}
|
|
|
|
|
2006-11-22 14:57:56 +00:00
|
|
|
static void ub_reset_task(struct work_struct *work)
|
2005-12-17 10:16:43 +00:00
|
|
|
{
|
2006-11-22 14:57:56 +00:00
|
|
|
struct ub_dev *sc = container_of(work, struct ub_dev, reset_work);
|
2005-12-17 10:16:43 +00:00
|
|
|
unsigned long flags;
|
|
|
|
struct ub_lun *lun;
|
2008-11-04 16:29:27 +00:00
|
|
|
int rc;
|
2005-12-17 10:16:43 +00:00
|
|
|
|
|
|
|
if (!sc->reset) {
|
|
|
|
printk(KERN_WARNING "%s: Running reset unrequested\n",
|
|
|
|
sc->name);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (atomic_read(&sc->poison)) {
|
2006-05-26 03:08:50 +00:00
|
|
|
;
|
2006-01-05 08:26:30 +00:00
|
|
|
} else if ((sc->reset & 1) == 0) {
|
|
|
|
ub_sync_reset(sc);
|
|
|
|
msleep(700); /* usb-storage sleeps 6s (!) */
|
|
|
|
ub_probe_clear_stall(sc, sc->recv_bulk_pipe);
|
|
|
|
ub_probe_clear_stall(sc, sc->send_bulk_pipe);
|
2005-12-17 10:16:43 +00:00
|
|
|
} else if (sc->dev->actconfig->desc.bNumInterfaces != 1) {
|
2006-05-26 03:08:50 +00:00
|
|
|
;
|
2005-12-17 10:16:43 +00:00
|
|
|
} else {
|
2008-11-04 16:29:27 +00:00
|
|
|
rc = usb_lock_device_for_reset(sc->dev, sc->intf);
|
|
|
|
if (rc < 0) {
|
2005-12-17 10:16:43 +00:00
|
|
|
printk(KERN_NOTICE
|
|
|
|
"%s: usb_lock_device_for_reset failed (%d)\n",
|
2008-11-04 16:29:27 +00:00
|
|
|
sc->name, rc);
|
2005-12-17 10:16:43 +00:00
|
|
|
} else {
|
|
|
|
rc = usb_reset_device(sc->dev);
|
|
|
|
if (rc < 0) {
|
|
|
|
printk(KERN_NOTICE "%s: "
|
|
|
|
"usb_lock_device_for_reset failed (%d)\n",
|
|
|
|
sc->name, rc);
|
|
|
|
}
|
2008-11-04 16:29:27 +00:00
|
|
|
usb_unlock_device(sc->dev);
|
2005-12-17 10:16:43 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* In theory, no commands can be running while reset is active,
|
|
|
|
* so nobody can ask for another reset, and so we do not need any
|
|
|
|
* queues of resets or anything. We do need a spinlock though,
|
|
|
|
* to interact with the block layer.
|
|
|
|
*/
|
2005-12-28 22:22:17 +00:00
|
|
|
spin_lock_irqsave(sc->lock, flags);
|
2005-12-17 10:16:43 +00:00
|
|
|
sc->reset = 0;
|
|
|
|
tasklet_schedule(&sc->tasklet);
|
2007-07-09 19:03:07 +00:00
|
|
|
list_for_each_entry(lun, &sc->luns, link) {
|
2005-12-17 10:16:43 +00:00
|
|
|
blk_start_queue(lun->disk->queue);
|
|
|
|
}
|
|
|
|
wake_up(&sc->reset_wait);
|
2005-12-28 22:22:17 +00:00
|
|
|
spin_unlock_irqrestore(sc->lock, flags);
|
2005-12-17 10:16:43 +00:00
|
|
|
}
|
|
|
|
|
2008-11-11 04:11:11 +00:00
|
|
|
/*
|
|
|
|
* XXX Reset brackets are too much hassle to implement, so just stub them
|
|
|
|
* in order to prevent forced unbinding (which deadlocks solid when our
|
|
|
|
* ->disconnect method waits for the reset to complete and this kills keventd).
|
|
|
|
*
|
|
|
|
* XXX Tell Alan to move usb_unlock_device inside of usb_reset_device,
|
|
|
|
* or else the post_reset is invoked, and restarts I/O on a locked device.
|
|
|
|
*/
|
|
|
|
static int ub_pre_reset(struct usb_interface *iface)
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int ub_post_reset(struct usb_interface *iface)
{
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
|
|
|
|
* This is called from a process context.
|
|
|
|
*/
|
2005-05-01 23:05:40 +00:00
|
|
|
static void ub_revalidate(struct ub_dev *sc, struct ub_lun *lun)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
lun->readonly = 0; /* XXX Query this from the device */
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
lun->capacity.nsec = 0;
|
|
|
|
lun->capacity.bsize = 512;
|
|
|
|
lun->capacity.bshift = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
if (ub_sync_tur(sc, lun) != 0)
|
2005-04-16 22:20:36 +00:00
|
|
|
return; /* Not ready */
|
2005-05-01 23:05:40 +00:00
|
|
|
lun->changed = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
if (ub_sync_read_cap(sc, lun, &lun->capacity) != 0) {
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
|
|
|
|
* The retry here means something is wrong, either with the
|
|
|
|
* device, with the transport, or with our code.
|
|
|
|
* We keep this because sd.c has retries for capacity.
|
|
|
|
*/
|
2005-05-01 23:05:40 +00:00
|
|
|
if (ub_sync_read_cap(sc, lun, &lun->capacity) != 0) {
|
|
|
|
lun->capacity.nsec = 0;
|
|
|
|
lun->capacity.bsize = 512;
|
|
|
|
lun->capacity.bshift = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The open function.
|
|
|
|
* This is mostly needed to keep refcounting, but also to support
|
|
|
|
* media checks on removable media drives.
|
|
|
|
*/
|
2008-03-02 15:21:43 +00:00
|
|
|
static int ub_bd_open(struct block_device *bdev, fmode_t mode)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2008-03-02 15:21:43 +00:00
|
|
|
struct ub_lun *lun = bdev->bd_disk->private_data;
|
2006-04-29 03:45:49 +00:00
|
|
|
struct ub_dev *sc = lun->udev;
|
2005-04-16 22:20:36 +00:00
|
|
|
unsigned long flags;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
spin_lock_irqsave(&ub_lock, flags);
|
|
|
|
if (atomic_read(&sc->poison)) {
|
|
|
|
spin_unlock_irqrestore(&ub_lock, flags);
|
|
|
|
return -ENXIO;
|
|
|
|
}
|
|
|
|
sc->openc++;
|
|
|
|
spin_unlock_irqrestore(&ub_lock, flags);
|
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
if (lun->removable || lun->readonly)
|
2008-03-02 15:21:43 +00:00
|
|
|
check_disk_change(bdev);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The sd.c considers ->media_present and ->changed not equivalent,
|
|
|
|
* under some pretty murky conditions (a failure of READ CAPACITY).
|
|
|
|
* We may need it one day.
|
|
|
|
*/
|
2008-03-02 15:21:43 +00:00
|
|
|
if (lun->removable && lun->changed && !(mode & FMODE_NDELAY)) {
|
2005-04-16 22:20:36 +00:00
|
|
|
rc = -ENOMEDIUM;
|
|
|
|
goto err_open;
|
|
|
|
}
|
|
|
|
|
2008-03-02 15:21:43 +00:00
|
|
|
if (lun->readonly && (mode & FMODE_WRITE)) {
|
2005-04-16 22:20:36 +00:00
|
|
|
rc = -EROFS;
|
|
|
|
goto err_open;
|
|
|
|
}
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
err_open:
|
|
|
|
ub_put(sc);
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
2010-08-07 16:25:34 +00:00
|
|
|
static int ub_bd_unlocked_open(struct block_device *bdev, fmode_t mode)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
|
2010-06-02 12:28:52 +00:00
|
|
|
mutex_lock(&ub_mutex);
|
2010-08-07 16:25:34 +00:00
|
|
|
ret = ub_bd_open(bdev, mode);
|
2010-06-02 12:28:52 +00:00
|
|
|
mutex_unlock(&ub_mutex);
|
2010-08-07 16:25:34 +00:00
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
 * The release function: drop the reference taken in ub_bd_open.
 */
|
2008-03-02 15:21:43 +00:00
|
|
|
static int ub_bd_release(struct gendisk *disk, fmode_t mode)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2005-05-01 23:05:40 +00:00
|
|
|
struct ub_lun *lun = disk->private_data;
|
|
|
|
struct ub_dev *sc = lun->udev;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2010-06-02 12:28:52 +00:00
|
|
|
mutex_lock(&ub_mutex);
|
2005-04-16 22:20:36 +00:00
|
|
|
ub_put(sc);
|
2010-06-02 12:28:52 +00:00
|
|
|
mutex_unlock(&ub_mutex);
|
2010-08-07 16:25:34 +00:00
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The ioctl interface.
|
|
|
|
*/
|
2008-03-02 15:21:43 +00:00
|
|
|
static int ub_bd_ioctl(struct block_device *bdev, fmode_t mode,
|
2005-04-16 22:20:36 +00:00
|
|
|
unsigned int cmd, unsigned long arg)
|
|
|
|
{
|
2008-03-02 15:21:43 +00:00
|
|
|
struct gendisk *disk = bdev->bd_disk;
|
2005-04-16 22:20:36 +00:00
|
|
|
void __user *usermem = (void __user *) arg;
|
2010-07-08 08:18:46 +00:00
|
|
|
int ret;
|
|
|
|
|
2010-06-02 12:28:52 +00:00
|
|
|
mutex_lock(&ub_mutex);
|
2010-07-08 08:18:46 +00:00
|
|
|
ret = scsi_cmd_ioctl(disk->queue, disk, mode, cmd, usermem);
|
2010-06-02 12:28:52 +00:00
|
|
|
mutex_unlock(&ub_mutex);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2010-07-08 08:18:46 +00:00
|
|
|
return ret;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2008-04-19 21:45:24 +00:00
|
|
|
* This is called by check_disk_change if we reported a media change.
|
2005-04-16 22:20:36 +00:00
|
|
|
* The main objective here is to discover the features of the media, such as
|
|
|
|
* the capacity, read-only status, etc. USB storage generally does not
|
|
|
|
* need to be spun up, but if we needed it, this would be the place.
|
|
|
|
*
|
|
|
|
* This call can sleep.
|
|
|
|
*
|
|
|
|
* The return code is not used.
|
|
|
|
*/
|
|
|
|
static int ub_bd_revalidate(struct gendisk *disk)
|
|
|
|
{
|
2005-05-01 23:05:40 +00:00
|
|
|
struct ub_lun *lun = disk->private_data;
|
|
|
|
|
|
|
|
ub_revalidate(lun->udev, lun);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/* XXX Support sector size switching like in sr.c */
|
2009-05-22 21:17:49 +00:00
|
|
|
blk_queue_logical_block_size(disk->queue, lun->capacity.bsize);
|
2005-05-01 23:05:40 +00:00
|
|
|
set_capacity(disk, lun->capacity.nsec);
|
|
|
|
// set_disk_ro(sdkp->disk, lun->readonly);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The check is called by the block layer to verify if the media
|
|
|
|
* is still available. It is supposed to be harmless, lightweight and
|
|
|
|
* non-intrusive in case the media was not changed.
|
|
|
|
*
|
|
|
|
* This call can sleep.
|
|
|
|
*
|
|
|
|
* The return code is a DISK_EVENT_* mask (nonzero means media change).
|
|
|
|
*/
|
2011-03-09 18:54:28 +00:00
|
|
|
static unsigned int ub_bd_check_events(struct gendisk *disk,
|
|
|
|
unsigned int clearing)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2005-05-01 23:05:40 +00:00
|
|
|
struct ub_lun *lun = disk->private_data;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
if (!lun->removable)
|
2005-04-16 22:20:36 +00:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We always clean checks after every command, so this is not as
* dangerous as it looks. If the TEST_UNIT_READY fails here, the
* device is genuinely not ready, and operator or software
* intervention is required. One dangerous case is a drive which
* spins itself down: come the time to write dirty pages, the
* write fails and the block layer discards the data. Since we
* never spin drives up, such devices simply cannot be used with
* ub anyway.
|
|
|
|
*/
|
2005-05-01 23:05:40 +00:00
|
|
|
if (ub_sync_tur(lun->udev, lun) != 0) {
|
|
|
|
lun->changed = 1;
|
2011-03-09 18:54:28 +00:00
|
|
|
return DISK_EVENT_MEDIA_CHANGE;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2011-03-09 18:54:28 +00:00
|
|
|
return lun->changed ? DISK_EVENT_MEDIA_CHANGE : 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2009-09-22 00:01:13 +00:00
|
|
|
static const struct block_device_operations ub_bd_fops = {
|
2005-04-16 22:20:36 +00:00
|
|
|
.owner = THIS_MODULE,
|
2010-08-07 16:25:34 +00:00
|
|
|
.open = ub_bd_unlocked_open,
|
2008-03-02 15:21:43 +00:00
|
|
|
.release = ub_bd_release,
|
2010-07-08 08:18:46 +00:00
|
|
|
.ioctl = ub_bd_ioctl,
|
2011-03-09 18:54:28 +00:00
|
|
|
.check_events = ub_bd_check_events,
|
2005-04-16 22:20:36 +00:00
|
|
|
.revalidate_disk = ub_bd_revalidate,
|
|
|
|
};
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Common ->done routine for commands executed synchronously.
|
|
|
|
*/
|
|
|
|
static void ub_probe_done(struct ub_dev *sc, struct ub_scsi_cmd *cmd)
|
|
|
|
{
|
|
|
|
struct completion *cop = cmd->back;
|
|
|
|
complete(cop);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Test if the device has a check condition on it, synchronously.
|
|
|
|
*/
|
2005-05-01 23:05:40 +00:00
|
|
|
static int ub_sync_tur(struct ub_dev *sc, struct ub_lun *lun)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
struct ub_scsi_cmd *cmd;
|
|
|
|
enum { ALLOC_SIZE = sizeof(struct ub_scsi_cmd) };
|
|
|
|
unsigned long flags;
|
|
|
|
struct completion compl;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
init_completion(&compl);
|
|
|
|
|
|
|
|
rc = -ENOMEM;
|
2006-02-14 04:35:57 +00:00
|
|
|
if ((cmd = kzalloc(ALLOC_SIZE, GFP_KERNEL)) == NULL)
|
2005-04-16 22:20:36 +00:00
|
|
|
goto err_alloc;
|
|
|
|
|
|
|
|
cmd->cdb[0] = TEST_UNIT_READY;
|
|
|
|
cmd->cdb_len = 6;
|
|
|
|
cmd->dir = UB_DIR_NONE;
|
|
|
|
cmd->state = UB_CMDST_INIT;
|
2005-05-01 23:05:40 +00:00
|
|
|
cmd->lun = lun; /* This may be NULL, but that's ok */
|
2005-04-16 22:20:36 +00:00
|
|
|
cmd->done = ub_probe_done;
|
|
|
|
cmd->back = &compl;
|
|
|
|
|
2005-12-28 22:22:17 +00:00
|
|
|
spin_lock_irqsave(sc->lock, flags);
|
2005-04-16 22:20:36 +00:00
|
|
|
cmd->tag = sc->tagcnt++;
|
|
|
|
|
|
|
|
rc = ub_submit_scsi(sc, cmd);
|
2005-12-28 22:22:17 +00:00
|
|
|
spin_unlock_irqrestore(sc->lock, flags);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2006-05-26 03:08:50 +00:00
|
|
|
if (rc != 0)
|
2005-04-16 22:20:36 +00:00
|
|
|
goto err_submit;
|
|
|
|
|
|
|
|
wait_for_completion(&compl);
|
|
|
|
|
|
|
|
rc = cmd->error;
|
|
|
|
|
|
|
|
if (rc == -EIO && cmd->key != 0) /* Retries for benh's key */
|
|
|
|
rc = cmd->key;
|
|
|
|
|
|
|
|
err_submit:
|
|
|
|
kfree(cmd);
|
|
|
|
err_alloc:
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Read the SCSI capacity synchronously (for probing).
|
|
|
|
*/
|
2005-05-01 23:05:40 +00:00
|
|
|
static int ub_sync_read_cap(struct ub_dev *sc, struct ub_lun *lun,
|
|
|
|
struct ub_capacity *ret)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
struct ub_scsi_cmd *cmd;
|
2005-08-15 04:16:03 +00:00
|
|
|
struct scatterlist *sg;
|
2005-04-16 22:20:36 +00:00
|
|
|
char *p;
|
|
|
|
enum { ALLOC_SIZE = sizeof(struct ub_scsi_cmd) + 8 };
|
|
|
|
unsigned long flags;
|
|
|
|
unsigned int bsize, shift;
|
|
|
|
unsigned long nsec;
|
|
|
|
struct completion compl;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
init_completion(&compl);
|
|
|
|
|
|
|
|
rc = -ENOMEM;
|
2006-02-14 04:35:57 +00:00
|
|
|
if ((cmd = kzalloc(ALLOC_SIZE, GFP_KERNEL)) == NULL)
|
2005-04-16 22:20:36 +00:00
|
|
|
goto err_alloc;
|
|
|
|
p = (char *)cmd + sizeof(struct ub_scsi_cmd);
|
|
|
|
|
|
|
|
cmd->cdb[0] = 0x25;
|
|
|
|
cmd->cdb_len = 10;
|
|
|
|
cmd->dir = UB_DIR_READ;
|
|
|
|
cmd->state = UB_CMDST_INIT;
|
2005-08-15 04:16:03 +00:00
|
|
|
cmd->nsg = 1;
|
|
|
|
sg = &cmd->sgv[0];
|
2007-10-25 07:17:03 +00:00
|
|
|
sg_init_table(sg, UB_MAX_REQ_SG);
|
2007-10-24 09:20:47 +00:00
|
|
|
sg_set_page(sg, virt_to_page(p), 8, (unsigned long)p & (PAGE_SIZE-1));
|
2005-04-16 22:20:36 +00:00
|
|
|
cmd->len = 8;
|
2005-05-01 23:05:40 +00:00
|
|
|
cmd->lun = lun;
|
2005-04-16 22:20:36 +00:00
|
|
|
cmd->done = ub_probe_done;
|
|
|
|
cmd->back = &compl;
|
|
|
|
|
2005-12-28 22:22:17 +00:00
|
|
|
spin_lock_irqsave(sc->lock, flags);
|
2005-04-16 22:20:36 +00:00
|
|
|
cmd->tag = sc->tagcnt++;
|
|
|
|
|
|
|
|
rc = ub_submit_scsi(sc, cmd);
|
2005-12-28 22:22:17 +00:00
|
|
|
spin_unlock_irqrestore(sc->lock, flags);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2006-05-26 03:08:50 +00:00
|
|
|
if (rc != 0)
|
2005-04-16 22:20:36 +00:00
|
|
|
goto err_submit;
|
|
|
|
|
|
|
|
wait_for_completion(&compl);
|
|
|
|
|
|
|
|
if (cmd->error != 0) {
|
|
|
|
rc = -EIO;
|
|
|
|
goto err_read;
|
|
|
|
}
|
|
|
|
if (cmd->act_len != 8) {
|
|
|
|
rc = -EIO;
|
|
|
|
goto err_read;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* sd.c special-cases sector size of 0 to mean 512. Needed? Safe? */
|
|
|
|
nsec = be32_to_cpu(*(__be32 *)p) + 1;
|
|
|
|
bsize = be32_to_cpu(*(__be32 *)(p + 4));
|
|
|
|
switch (bsize) {
|
|
|
|
case 512: shift = 0; break;
|
|
|
|
case 1024: shift = 1; break;
|
|
|
|
case 2048: shift = 2; break;
|
|
|
|
case 4096: shift = 3; break;
|
|
|
|
default:
|
|
|
|
rc = -EDOM;
|
|
|
|
goto err_inv_bsize;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret->bsize = bsize;
|
|
|
|
ret->bshift = shift;
|
|
|
|
ret->nsec = nsec << shift;
|
|
|
|
rc = 0;
|
|
|
|
|
|
|
|
err_inv_bsize:
|
|
|
|
err_read:
|
|
|
|
err_submit:
|
|
|
|
kfree(cmd);
|
|
|
|
err_alloc:
|
|
|
|
return rc;
|
|
|
|
}
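
/*
 * Editorial worked example for the arithmetic above. READ CAPACITY(10)
 * returns two big-endian 32-bit values: the last LBA and the block
 * size in bytes. For a medium with 4096-byte blocks and last LBA
 * 999999:
 *
 *   nsec  = 999999 + 1 = 1000000        (device blocks)
 *   bsize = 4096, so shift = 3
 *   ret->nsec = 1000000 << 3 = 8000000  (512-byte sectors)
 *
 * The shift converts device blocks into the 512-byte sectors that
 * set_capacity() expects, which is why ub_bd_revalidate can feed
 * lun->capacity.nsec to it unmodified.
 */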
|
|
|
|
|
|
|
|
/*
 * Completion and timeout callbacks for synchronously executed URBs.
 */
|
IRQ: Maintain regs pointer globally rather than passing to IRQ handlers
Maintain a per-CPU global "struct pt_regs *" variable which can be used instead
of passing regs around manually through all ~1800 interrupt handlers in the
Linux kernel.
The regs pointer is used in few places, but it potentially costs both stack
space and code to pass it around. On the FRV arch, removing the regs parameter
from all the genirq function results in a 20% speed up of the IRQ exit path
(ie: from leaving timer_interrupt() to leaving do_IRQ()).
Where appropriate, an arch may override the generic storage facility and do
something different with the variable. On FRV, for instance, the address is
maintained in GR28 at all times inside the kernel as part of general exception
handling.
Having looked over the code, it appears that the parameter may be handed down
through up to twenty or so layers of functions. Consider a USB character
device attached to a USB hub, attached to a USB controller that posts its
interrupts through a cascaded auxiliary interrupt controller. A character
device driver may want to pass regs to the sysrq handler through the input
layer which adds another few layers of parameter passing.
I've built this code with allyesconfig for x86_64 and i386. I've runtested the
main part of the code on FRV and i386, though I can't test most of the drivers.
I've also done partial conversion for powerpc and MIPS - these at least compile
with minimal configurations.
This will affect all archs. Mostly the changes should be relatively easy.
Take do_IRQ(), store the regs pointer at the beginning, saving the old one:
struct pt_regs *old_regs = set_irq_regs(regs);
And put the old one back at the end:
set_irq_regs(old_regs);
Don't pass regs through to generic_handle_irq() or __do_IRQ().
In timer_interrupt(), this sort of change will be necessary:
- update_process_times(user_mode(regs));
- profile_tick(CPU_PROFILING, regs);
+ update_process_times(user_mode(get_irq_regs()));
+ profile_tick(CPU_PROFILING);
I'd like to move update_process_times()'s use of get_irq_regs() into itself,
except that i386, alone of the archs, uses something other than user_mode().
Some notes on the interrupt handling in the drivers:
(*) input_regs() is now gone entirely. The regs pointer is no longer stored in
the input_dev struct.
(*) finish_unlinks() in drivers/usb/host/ohci-q.c needs checking. It does
something different depending on whether it's been supplied with a regs
pointer or not.
(*) Various IRQ handler function pointers have been moved to type
irq_handler_t.
Signed-Off-By: David Howells <dhowells@redhat.com>
(cherry picked from 1b16e7ac850969f38b375e511e3fa2f474a33867 commit)
2006-10-05 13:55:46 +00:00
|
|
|
static void ub_probe_urb_complete(struct urb *urb)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
struct completion *cop = urb->context;
|
|
|
|
complete(cop);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void ub_probe_timeout(unsigned long arg)
|
|
|
|
{
|
|
|
|
struct completion *cop = (struct completion *) arg;
|
|
|
|
complete(cop);
|
|
|
|
}
|
|
|
|
|
2006-01-05 08:26:30 +00:00
|
|
|
/*
|
|
|
|
* Reset with a Bulk reset.
|
|
|
|
*/
|
|
|
|
static int ub_sync_reset(struct ub_dev *sc)
|
|
|
|
{
|
|
|
|
int ifnum = sc->intf->cur_altsetting->desc.bInterfaceNumber;
|
|
|
|
struct usb_ctrlrequest *cr;
|
|
|
|
struct completion compl;
|
|
|
|
struct timer_list timer;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
init_completion(&compl);
|
|
|
|
|
|
|
|
cr = &sc->work_cr;
|
|
|
|
cr->bRequestType = USB_TYPE_CLASS | USB_RECIP_INTERFACE;
|
|
|
|
cr->bRequest = US_BULK_RESET_REQUEST;
|
|
|
|
cr->wValue = cpu_to_le16(0);
|
|
|
|
cr->wIndex = cpu_to_le16(ifnum);
|
|
|
|
cr->wLength = cpu_to_le16(0);
|
|
|
|
|
|
|
|
usb_fill_control_urb(&sc->work_urb, sc->dev, sc->send_ctrl_pipe,
|
|
|
|
(unsigned char*) cr, NULL, 0, ub_probe_urb_complete, &compl);
|
|
|
|
|
|
|
|
if ((rc = usb_submit_urb(&sc->work_urb, GFP_KERNEL)) != 0) {
|
|
|
|
printk(KERN_WARNING
|
|
|
|
"%s: Unable to submit a bulk reset (%d)\n", sc->name, rc);
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
|
|
|
init_timer(&timer);
|
|
|
|
timer.function = ub_probe_timeout;
|
|
|
|
timer.data = (unsigned long) &compl;
|
|
|
|
timer.expires = jiffies + UB_CTRL_TIMEOUT;
|
|
|
|
add_timer(&timer);
|
|
|
|
|
|
|
|
wait_for_completion(&compl);
|
|
|
|
|
|
|
|
del_timer_sync(&timer);
|
|
|
|
usb_kill_urb(&sc->work_urb);
|
|
|
|
|
|
|
|
return sc->work_urb.status;
|
|
|
|
}
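
/*
 * Editorial aside: on the wire, the Bulk-Only Mass Storage Reset above
 * is this class-specific setup packet, shown for interface 0 with no
 * data stage:
 *
 *   bmRequestType  bRequest  wValue  wIndex  wLength
 *   0x21           0xFF      0x0000  0x0000  0x0000
 */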
|
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
/*
|
|
|
|
* Get number of LUNs by the way of Bulk GetMaxLUN command.
|
|
|
|
*/
|
|
|
|
static int ub_sync_getmaxlun(struct ub_dev *sc)
|
|
|
|
{
|
|
|
|
int ifnum = sc->intf->cur_altsetting->desc.bInterfaceNumber;
|
|
|
|
unsigned char *p;
|
|
|
|
enum { ALLOC_SIZE = 1 };
|
|
|
|
struct usb_ctrlrequest *cr;
|
|
|
|
struct completion compl;
|
|
|
|
struct timer_list timer;
|
|
|
|
int nluns;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
init_completion(&compl);
|
|
|
|
|
|
|
|
rc = -ENOMEM;
|
|
|
|
if ((p = kmalloc(ALLOC_SIZE, GFP_KERNEL)) == NULL)
|
|
|
|
goto err_alloc;
|
|
|
|
*p = 55;
|
|
|
|
|
|
|
|
cr = &sc->work_cr;
|
|
|
|
cr->bRequestType = USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE;
|
|
|
|
cr->bRequest = US_BULK_GET_MAX_LUN;
|
|
|
|
cr->wValue = cpu_to_le16(0);
|
|
|
|
cr->wIndex = cpu_to_le16(ifnum);
|
|
|
|
cr->wLength = cpu_to_le16(1);
|
|
|
|
|
|
|
|
usb_fill_control_urb(&sc->work_urb, sc->dev, sc->recv_ctrl_pipe,
|
|
|
|
(unsigned char*) cr, p, 1, ub_probe_urb_complete, &compl);
|
|
|
|
|
2006-05-26 03:08:50 +00:00
|
|
|
if ((rc = usb_submit_urb(&sc->work_urb, GFP_KERNEL)) != 0)
|
2005-05-01 23:05:40 +00:00
|
|
|
goto err_submit;
|
|
|
|
|
|
|
|
init_timer(&timer);
|
|
|
|
timer.function = ub_probe_timeout;
|
|
|
|
timer.data = (unsigned long) &compl;
|
|
|
|
timer.expires = jiffies + UB_CTRL_TIMEOUT;
|
|
|
|
add_timer(&timer);
|
|
|
|
|
|
|
|
wait_for_completion(&compl);
|
|
|
|
|
|
|
|
del_timer_sync(&timer);
|
|
|
|
usb_kill_urb(&sc->work_urb);
|
|
|
|
|
2006-05-26 03:08:50 +00:00
|
|
|
if ((rc = sc->work_urb.status) < 0)
|
2005-09-22 07:48:29 +00:00
|
|
|
goto err_io;
|
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
if (sc->work_urb.actual_length != 1) {
|
|
|
|
nluns = 0;
|
|
|
|
} else {
|
|
|
|
if ((nluns = *p) == 55) {
|
|
|
|
nluns = 0;
|
|
|
|
} else {
|
|
|
|
/* GetMaxLUN returns the maximum LUN number */
|
|
|
|
nluns += 1;
|
|
|
|
if (nluns > UB_MAX_LUNS)
|
|
|
|
nluns = UB_MAX_LUNS;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
kfree(p);
|
|
|
|
return nluns;
|
|
|
|
|
2005-09-22 07:48:29 +00:00
|
|
|
err_io:
|
2005-05-01 23:05:40 +00:00
|
|
|
err_submit:
|
|
|
|
kfree(p);
|
|
|
|
err_alloc:
|
|
|
|
return rc;
|
|
|
|
}
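
/*
 * Editorial aside: Get Max LUN is the device-to-host counterpart of
 * the Bulk reset above; for interface 0 the setup packet reads:
 *
 *   bmRequestType  bRequest  wValue  wIndex  wLength
 *   0xA1           0xFE      0x0000  0x0000  0x0001
 *
 * The single byte returned is the highest LUN number, so a four-slot
 * reader answers 3 and the code above makes nluns = 4. The buffer is
 * preset to 55 so that a reply which leaves it untouched is rejected
 * as bogus.
 */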
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
|
|
|
|
* Clear initial stalls.
|
|
|
|
*/
|
|
|
|
static int ub_probe_clear_stall(struct ub_dev *sc, int stalled_pipe)
|
|
|
|
{
|
|
|
|
int endp;
|
|
|
|
struct usb_ctrlrequest *cr;
|
|
|
|
struct completion compl;
|
|
|
|
struct timer_list timer;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
init_completion(&compl);
|
|
|
|
|
|
|
|
endp = usb_pipeendpoint(stalled_pipe);
|
|
|
|
if (usb_pipein(stalled_pipe))
|
|
|
|
endp |= USB_DIR_IN;
|
|
|
|
|
|
|
|
cr = &sc->work_cr;
|
|
|
|
cr->bRequestType = USB_RECIP_ENDPOINT;
|
|
|
|
cr->bRequest = USB_REQ_CLEAR_FEATURE;
|
|
|
|
cr->wValue = cpu_to_le16(USB_ENDPOINT_HALT);
|
|
|
|
cr->wIndex = cpu_to_le16(endp);
|
|
|
|
cr->wLength = cpu_to_le16(0);
|
|
|
|
|
|
|
|
usb_fill_control_urb(&sc->work_urb, sc->dev, sc->send_ctrl_pipe,
|
|
|
|
(unsigned char*) cr, NULL, 0, ub_probe_urb_complete, &compl);
|
|
|
|
|
|
|
|
if ((rc = usb_submit_urb(&sc->work_urb, GFP_KERNEL)) != 0) {
|
|
|
|
printk(KERN_WARNING
|
|
|
|
"%s: Unable to submit a probe clear (%d)\n", sc->name, rc);
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
|
|
|
init_timer(&timer);
|
|
|
|
timer.function = ub_probe_timeout;
|
|
|
|
timer.data = (unsigned long) &compl;
|
|
|
|
timer.expires = jiffies + UB_CTRL_TIMEOUT;
|
|
|
|
add_timer(&timer);
|
|
|
|
|
|
|
|
wait_for_completion(&compl);
|
|
|
|
|
|
|
|
del_timer_sync(&timer);
|
|
|
|
usb_kill_urb(&sc->work_urb);
|
|
|
|
|
2009-04-08 17:36:28 +00:00
|
|
|
usb_reset_endpoint(sc->dev, endp);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Get the pipe settings.
|
|
|
|
*/
|
|
|
|
static int ub_get_pipes(struct ub_dev *sc, struct usb_device *dev,
|
|
|
|
struct usb_interface *intf)
|
|
|
|
{
|
|
|
|
struct usb_host_interface *altsetting = intf->cur_altsetting;
|
|
|
|
struct usb_endpoint_descriptor *ep_in = NULL;
|
|
|
|
struct usb_endpoint_descriptor *ep_out = NULL;
|
|
|
|
struct usb_endpoint_descriptor *ep;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Find the endpoints we need.
|
|
|
|
* We are expecting a minimum of 2 endpoints - in and out (bulk).
|
|
|
|
* We will ignore any others.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < altsetting->desc.bNumEndpoints; i++) {
|
|
|
|
ep = &altsetting->endpoint[i].desc;
|
|
|
|
|
|
|
|
/* Is it a BULK endpoint? */
|
2008-12-29 10:19:10 +00:00
|
|
|
if (usb_endpoint_xfer_bulk(ep)) {
|
2005-04-16 22:20:36 +00:00
|
|
|
/* BULK in or out? */
|
2008-12-29 10:19:10 +00:00
|
|
|
if (usb_endpoint_dir_in(ep)) {
|
2007-03-09 03:56:23 +00:00
|
|
|
if (ep_in == NULL)
|
|
|
|
ep_in = ep;
|
|
|
|
} else {
|
|
|
|
if (ep_out == NULL)
|
|
|
|
ep_out = ep;
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (ep_in == NULL || ep_out == NULL) {
|
2008-04-19 21:45:24 +00:00
|
|
|
printk(KERN_NOTICE "%s: failed endpoint check\n", sc->name);
|
2005-12-17 10:16:43 +00:00
|
|
|
return -ENODEV;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Calculate and store the pipe values */
|
|
|
|
sc->send_ctrl_pipe = usb_sndctrlpipe(dev, 0);
|
|
|
|
sc->recv_ctrl_pipe = usb_rcvctrlpipe(dev, 0);
|
|
|
|
sc->send_bulk_pipe = usb_sndbulkpipe(dev,
|
2008-12-29 10:19:10 +00:00
|
|
|
usb_endpoint_num(ep_out));
|
2005-04-16 22:20:36 +00:00
|
|
|
sc->recv_bulk_pipe = usb_rcvbulkpipe(dev,
|
2008-12-29 10:19:10 +00:00
|
|
|
usb_endpoint_num(ep_in));
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
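
/*
 * Editorial aside: the loop above is the classic hand-rolled endpoint
 * scan. Later kernels grew a helper that does the same search; under
 * that assumption (usb_find_common_endpoints() is a much newer API
 * than this driver), the scan collapses to roughly:
 */
#if 0
	struct usb_endpoint_descriptor *ep_in, *ep_out;

	if (usb_find_common_endpoints(intf->cur_altsetting,
			&ep_in, &ep_out, NULL, NULL) < 0) {
		printk(KERN_NOTICE "%s: failed endpoint check\n", sc->name);
		return -ENODEV;
	}
#endif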
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Probing is done in the process context, which allows us to cheat
|
|
|
|
* and not build a state machine for the discovery.
|
|
|
|
*/
|
|
|
|
static int ub_probe(struct usb_interface *intf,
|
|
|
|
const struct usb_device_id *dev_id)
|
|
|
|
{
|
|
|
|
struct ub_dev *sc;
|
2005-05-01 23:05:40 +00:00
|
|
|
int nluns;
|
2005-04-16 22:20:36 +00:00
|
|
|
int rc;
|
|
|
|
int i;
|
|
|
|
|
2005-10-23 03:15:09 +00:00
|
|
|
if (usb_usual_check_type(dev_id, USB_US_TYPE_UB))
|
|
|
|
return -ENXIO;
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
rc = -ENOMEM;
|
2006-02-14 04:35:57 +00:00
|
|
|
if ((sc = kzalloc(sizeof(struct ub_dev), GFP_KERNEL)) == NULL)
|
2005-04-16 22:20:36 +00:00
|
|
|
goto err_core;
|
2005-12-28 22:22:17 +00:00
|
|
|
sc->lock = ub_next_lock();
|
2005-05-01 23:05:40 +00:00
|
|
|
INIT_LIST_HEAD(&sc->luns);
|
2005-04-16 22:20:36 +00:00
|
|
|
usb_init_urb(&sc->work_urb);
|
|
|
|
tasklet_init(&sc->tasklet, ub_scsi_action, (unsigned long)sc);
|
|
|
|
atomic_set(&sc->poison, 0);
|
2006-11-22 14:57:56 +00:00
|
|
|
INIT_WORK(&sc->reset_work, ub_reset_task);
|
2005-12-17 10:16:43 +00:00
|
|
|
init_waitqueue_head(&sc->reset_wait);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
init_timer(&sc->work_timer);
|
|
|
|
sc->work_timer.data = (unsigned long) sc;
|
|
|
|
sc->work_timer.function = ub_urb_timeout;
|
|
|
|
|
|
|
|
ub_init_completion(&sc->work_done);
|
|
|
|
sc->work_done.done = 1; /* A little yuk, but oh well... */
|
|
|
|
|
|
|
|
sc->dev = interface_to_usbdev(intf);
|
|
|
|
sc->intf = intf;
|
|
|
|
// sc->ifnum = intf->cur_altsetting->desc.bInterfaceNumber;
|
|
|
|
usb_set_intfdata(intf, sc);
|
|
|
|
usb_get_dev(sc->dev);
|
2006-05-03 07:16:00 +00:00
|
|
|
/*
|
|
|
|
* Since we give the interface struct to the block level through
|
|
|
|
* disk->driverfs_dev, we have to pin it. Otherwise, block_uevent
|
|
|
|
* oopses on close after a disconnect (kernels 2.6.16 and up).
|
|
|
|
*/
|
|
|
|
usb_get_intf(sc->intf);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
snprintf(sc->name, 12, DRV_NAME "(%d.%d)",
|
|
|
|
sc->dev->bus->busnum, sc->dev->devnum);
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
/* XXX Verify that we can handle the device (from descriptors) */
|
|
|
|
|
2005-12-17 10:16:43 +00:00
|
|
|
if (ub_get_pipes(sc, sc->dev, intf) != 0)
|
|
|
|
goto err_dev_desc;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* At this point, all USB initialization is done, do upper layer.
|
|
|
|
* We really hate halfway initialized structures, so from the
|
|
|
|
* invariants perspective, this ub_dev is fully constructed at
|
|
|
|
* this point.
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This is needed to clear toggles. It is a problem only if we do
|
|
|
|
* `rmmod ub && modprobe ub` without disconnects, but we like that.
|
|
|
|
*/
|
2005-09-22 07:49:45 +00:00
|
|
|
#if 0 /* iPod Mini fails if we do this (big white iPod works) */
|
2005-04-16 22:20:36 +00:00
|
|
|
ub_probe_clear_stall(sc, sc->recv_bulk_pipe);
|
|
|
|
ub_probe_clear_stall(sc, sc->send_bulk_pipe);
|
2005-09-22 07:49:45 +00:00
|
|
|
#endif
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* The way this is used by the startup code is a little specific.
|
|
|
|
* A SCSI check causes a USB stall. Our common case code sees it
|
|
|
|
* and clears the check, after which the device is ready for use.
|
|
|
|
* But if a check was not present, any command other than
|
|
|
|
* TEST_UNIT_READY ends with a lockup (including REQUEST_SENSE).
|
|
|
|
*
|
|
|
|
* If we neglect to clear the SCSI check, the first real command fails
|
|
|
|
* (which is the capacity readout). We clear that and retry, but why
* cause spurious retries for no reason?
|
|
|
|
*
|
|
|
|
* Revalidation may start with its own TEST_UNIT_READY, but that one
|
|
|
|
* has to succeed, so we clear checks with an additional one here.
|
|
|
|
* In any case it's not our business how revalidation is implemented.
|
|
|
|
*/
|
2006-05-26 03:08:50 +00:00
|
|
|
for (i = 0; i < 3; i++) { /* Retries for the schwag key from KS'04 */
|
2005-05-01 23:05:40 +00:00
|
|
|
if ((rc = ub_sync_tur(sc, NULL)) <= 0) break;
|
2005-04-16 22:20:36 +00:00
|
|
|
if (rc != 0x6) break;
|
|
|
|
msleep(10);
|
|
|
|
}
|
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
nluns = 1;
|
|
|
|
for (i = 0; i < 3; i++) {
|
2006-03-03 00:53:00 +00:00
|
|
|
if ((rc = ub_sync_getmaxlun(sc)) < 0)
|
2005-05-01 23:05:40 +00:00
|
|
|
break;
|
|
|
|
if (rc != 0) {
|
|
|
|
nluns = rc;
|
|
|
|
break;
|
|
|
|
}
|
2005-06-06 20:54:59 +00:00
|
|
|
msleep(100);
|
2005-05-01 23:05:40 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
for (i = 0; i < nluns; i++) {
|
|
|
|
ub_probe_lun(sc, i);
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
|
2005-12-17 10:16:43 +00:00
|
|
|
err_dev_desc:
|
2005-05-01 23:05:40 +00:00
|
|
|
usb_set_intfdata(intf, NULL);
|
2006-05-03 07:16:00 +00:00
|
|
|
usb_put_intf(sc->intf);
|
2005-05-01 23:05:40 +00:00
|
|
|
usb_put_dev(sc->dev);
|
|
|
|
kfree(sc);
|
|
|
|
err_core:
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int ub_probe_lun(struct ub_dev *sc, int lnum)
|
|
|
|
{
|
|
|
|
struct ub_lun *lun;
|
2007-07-24 07:28:11 +00:00
|
|
|
struct request_queue *q;
|
2005-05-01 23:05:40 +00:00
|
|
|
struct gendisk *disk;
|
|
|
|
int rc;
|
|
|
|
|
|
|
|
rc = -ENOMEM;
|
2006-02-14 04:35:57 +00:00
|
|
|
if ((lun = kzalloc(sizeof(struct ub_lun), GFP_KERNEL)) == NULL)
|
2005-05-01 23:05:40 +00:00
|
|
|
goto err_alloc;
|
|
|
|
lun->num = lnum;
|
|
|
|
|
|
|
|
rc = -ENOSR;
|
|
|
|
if ((lun->id = ub_id_get()) == -1)
|
|
|
|
goto err_id;
|
|
|
|
|
|
|
|
lun->udev = sc;
|
|
|
|
|
|
|
|
snprintf(lun->name, 16, DRV_NAME "%c(%d.%d.%d)",
|
|
|
|
lun->id + 'a', sc->dev->bus->busnum, sc->dev->devnum, lun->num);
|
|
|
|
|
|
|
|
lun->removable = 1; /* XXX Query this from the device */
|
|
|
|
lun->changed = 1; /* ub_revalidate clears only */
|
|
|
|
ub_revalidate(sc, lun);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
rc = -ENOMEM;
|
2005-12-17 10:34:12 +00:00
|
|
|
if ((disk = alloc_disk(UB_PARTS_PER_LUN)) == NULL)
|
2005-04-16 22:20:36 +00:00
|
|
|
goto err_diskalloc;
|
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
sprintf(disk->disk_name, DRV_NAME "%c", lun->id + 'a');
|
2005-04-16 22:20:36 +00:00
|
|
|
disk->major = UB_MAJOR;
|
2005-12-17 10:34:12 +00:00
|
|
|
disk->first_minor = lun->id * UB_PARTS_PER_LUN;
|
2005-04-16 22:20:36 +00:00
|
|
|
disk->fops = &ub_bd_fops;
|
2011-03-09 18:54:28 +00:00
|
|
|
disk->events = DISK_EVENT_MEDIA_CHANGE;
|
2005-05-01 23:05:40 +00:00
|
|
|
disk->private_data = lun;
|
2005-09-22 07:48:29 +00:00
|
|
|
disk->driverfs_dev = &sc->intf->dev;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
rc = -ENOMEM;
|
2005-12-28 22:22:17 +00:00
|
|
|
if ((q = blk_init_queue(ub_request_fn, sc->lock)) == NULL)
|
2005-04-16 22:20:36 +00:00
|
|
|
goto err_blkqinit;
|
|
|
|
|
|
|
|
disk->queue = q;
|
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
blk_queue_bounce_limit(q, BLK_BOUNCE_HIGH);
|
2010-02-26 05:20:39 +00:00
|
|
|
blk_queue_max_segments(q, UB_MAX_REQ_SG);
|
2005-05-01 23:05:40 +00:00
|
|
|
blk_queue_segment_boundary(q, 0xffffffff); /* Dubious. */
|
2010-02-26 05:20:38 +00:00
|
|
|
blk_queue_max_hw_sectors(q, UB_MAX_SECTORS);
|
2009-05-22 21:17:49 +00:00
|
|
|
blk_queue_logical_block_size(q, lun->capacity.bsize);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2006-05-26 03:04:54 +00:00
|
|
|
lun->disk = disk;
|
2005-05-01 23:05:40 +00:00
|
|
|
q->queuedata = lun;
|
2006-05-26 03:04:54 +00:00
|
|
|
list_add(&lun->link, &sc->luns);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-05-01 23:05:40 +00:00
|
|
|
set_capacity(disk, lun->capacity.nsec);
|
|
|
|
if (lun->removable)
|
2005-04-16 22:20:36 +00:00
|
|
|
disk->flags |= GENHD_FL_REMOVABLE;
|
|
|
|
|
|
|
|
add_disk(disk);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
err_blkqinit:
|
|
|
|
put_disk(disk);
|
|
|
|
err_diskalloc:
|
2005-05-01 23:05:40 +00:00
|
|
|
ub_id_put(lun->id);
|
2005-04-16 22:20:36 +00:00
|
|
|
err_id:
|
2005-05-01 23:05:40 +00:00
|
|
|
kfree(lun);
|
|
|
|
err_alloc:
|
2005-04-16 22:20:36 +00:00
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void ub_disconnect(struct usb_interface *intf)
|
|
|
|
{
|
|
|
|
struct ub_dev *sc = usb_get_intfdata(intf);
|
2005-05-01 23:05:40 +00:00
|
|
|
struct ub_lun *lun;
|
2005-04-16 22:20:36 +00:00
|
|
|
unsigned long flags;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Prevent ub_bd_release from pulling the rug from under us.
|
|
|
|
* XXX This is starting to look like a kref.
|
|
|
|
* XXX Why not to take this ref at probe time?
|
|
|
|
*/
|
|
|
|
spin_lock_irqsave(&ub_lock, flags);
|
|
|
|
sc->openc++;
|
|
|
|
spin_unlock_irqrestore(&ub_lock, flags);
|
|
|
|
|
|
|
|
/*
|
2008-04-19 21:45:24 +00:00
|
|
|
* Fence stall clearings, operations triggered by unlinkings and so on.
|
2005-04-16 22:20:36 +00:00
|
|
|
* We do not attempt to unlink any URBs, because we do not trust the
|
|
|
|
* unlink paths in HC drivers. Also, we get -84 upon disconnect anyway.
|
|
|
|
*/
|
|
|
|
atomic_set(&sc->poison, 1);
|
|
|
|
|
2005-12-17 10:16:43 +00:00
|
|
|
/*
|
|
|
|
* Wait for reset to end, if any.
|
|
|
|
*/
|
|
|
|
wait_event(sc->reset_wait, !sc->reset);
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
/*
|
|
|
|
* Blow away queued commands.
|
|
|
|
*
|
|
|
|
* Actually, this never works, because before we get here
|
|
|
|
* the HCD terminates outstanding URB(s). It causes our
|
|
|
|
* SCSI command queue to advance, commands fail to submit,
|
|
|
|
* and the whole queue drains. So, we just use this code to
|
|
|
|
* print warnings.
|
|
|
|
*/
|
2005-12-28 22:22:17 +00:00
|
|
|
spin_lock_irqsave(sc->lock, flags);
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
struct ub_scsi_cmd *cmd;
|
|
|
|
int cnt = 0;
|
2005-12-17 10:16:43 +00:00
|
|
|
while ((cmd = ub_cmdq_peek(sc)) != NULL) {
|
2005-04-16 22:20:36 +00:00
|
|
|
cmd->error = -ENOTCONN;
|
|
|
|
cmd->state = UB_CMDST_DONE;
|
|
|
|
ub_cmdq_pop(sc);
|
|
|
|
(*cmd->done)(sc, cmd);
|
|
|
|
cnt++;
|
|
|
|
}
|
|
|
|
if (cnt != 0) {
|
|
|
|
printk(KERN_WARNING "%s: "
|
|
|
|
"%d was queued after shutdown\n", sc->name, cnt);
|
|
|
|
}
|
|
|
|
}
|
2005-12-28 22:22:17 +00:00
|
|
|
spin_unlock_irqrestore(sc->lock, flags);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Unregister the upper layer.
|
|
|
|
*/
|
2007-07-09 19:03:07 +00:00
|
|
|
list_for_each_entry(lun, &sc->luns, link) {
|
2006-05-26 03:04:54 +00:00
|
|
|
del_gendisk(lun->disk);
|
2005-05-01 23:05:40 +00:00
|
|
|
/*
|
|
|
|
* I wish I could do:
|
2008-04-29 12:48:33 +00:00
|
|
|
* queue_flag_set(QUEUE_FLAG_DEAD, q);
|
2005-05-01 23:05:40 +00:00
|
|
|
* As it is, we rely on our internal poisoning and let
|
|
|
|
* the upper levels spin furiously, failing all the I/O.
|
|
|
|
*/
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Testing for -EINPROGRESS is always a bug, so we are bending
|
|
|
|
* the rules a little.
|
|
|
|
*/
|
2005-12-28 22:22:17 +00:00
|
|
|
spin_lock_irqsave(sc->lock, flags);
|
2005-04-16 22:20:36 +00:00
|
|
|
if (sc->work_urb.status == -EINPROGRESS) { /* janitors: ignore */
|
|
|
|
printk(KERN_WARNING "%s: "
|
|
|
|
"URB is active after disconnect\n", sc->name);
|
|
|
|
}
|
2005-12-28 22:22:17 +00:00
|
|
|
spin_unlock_irqrestore(sc->lock, flags);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
/*
|
2008-04-19 21:45:24 +00:00
|
|
|
* There is virtually no chance that another CPU runs a timeout so long
|
2005-04-16 22:20:36 +00:00
|
|
|
* after ub_urb_complete should have called del_timer, but only if HCD
|
|
|
|
* didn't forget to deliver a callback on unlink.
|
|
|
|
*/
|
|
|
|
del_timer_sync(&sc->work_timer);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* At this point there must be no commands coming from anyone
|
|
|
|
* and no URBs left in transit.
|
|
|
|
*/
|
|
|
|
|
|
|
|
ub_put(sc);
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct usb_driver ub_driver = {
|
|
|
|
.name = "ub",
|
|
|
|
.probe = ub_probe,
|
|
|
|
.disconnect = ub_disconnect,
|
|
|
|
.id_table = ub_usb_ids,
|
2008-11-11 04:11:11 +00:00
|
|
|
.pre_reset = ub_pre_reset,
|
|
|
|
.post_reset = ub_post_reset,
|
2005-04-16 22:20:36 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
static int __init ub_init(void)
|
|
|
|
{
|
|
|
|
int rc;
|
2005-12-28 22:22:17 +00:00
|
|
|
int i;
|
|
|
|
|
|
|
|
for (i = 0; i < UB_QLOCK_NUM; i++)
|
|
|
|
spin_lock_init(&ub_qlockv[i]);
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
if ((rc = register_blkdev(UB_MAJOR, DRV_NAME)) != 0)
|
|
|
|
goto err_regblkdev;
|
|
|
|
|
|
|
|
if ((rc = usb_register(&ub_driver)) != 0)
|
|
|
|
goto err_register;
|
|
|
|
|
2005-10-23 03:15:09 +00:00
|
|
|
usb_usual_set_present(USB_US_TYPE_UB);
|
2005-04-16 22:20:36 +00:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
err_register:
|
|
|
|
unregister_blkdev(UB_MAJOR, DRV_NAME);
|
|
|
|
err_regblkdev:
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void __exit ub_exit(void)
|
|
|
|
{
|
|
|
|
usb_deregister(&ub_driver);
|
|
|
|
|
|
|
|
unregister_blkdev(UB_MAJOR, DRV_NAME);
|
2005-10-23 03:15:09 +00:00
|
|
|
usb_usual_clear_present(USB_US_TYPE_UB);
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
module_init(ub_init);
|
|
|
|
module_exit(ub_exit);
|
|
|
|
|
|
|
|
MODULE_LICENSE("GPL");
|